Test Report: Docker_Linux_containerd_arm64 22054

83cf6fd59e5d8f3d63346b28bfbd6fd8e1f567be:2025-12-08:42677

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 500.59
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.82
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.24
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.39
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.31
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 733.01
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.23
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.81
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.37
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.68
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.46
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 107.41
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.06
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.28
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.27
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.26
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.28
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.44
358 TestKubernetesUpgrade 795.98
404 TestStartStop/group/no-preload/serial/FirstStart 511.31
437 TestStartStop/group/newest-cni/serial/FirstStart 499.95
438 TestStartStop/group/no-preload/serial/DeployApp 3.14
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 102.21
442 TestStartStop/group/no-preload/serial/SecondStart 370.19
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 90.48
447 TestStartStop/group/newest-cni/serial/SecondStart 374.26
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.84
452 TestStartStop/group/newest-cni/serial/Pause 9.72
460 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 272.22
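To re-run any one of these failures locally, the integration test can be invoked by name. A minimal sketch, assuming minikube's standard integration-test layout (the package path and -timeout value are assumptions, not taken from this report; go test -run matches the subtest chain as a regular expression, so the slashes select the exact subtest shown in the table):

	go test ./test/integration -run "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy" -timeout 60m -v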
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1208 00:24:30.128022  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:24:57.838888  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.226924  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.233617  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.245349  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.266807  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.308284  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.389827  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.551415  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.873216  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:29.515248  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:30.797641  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:33.359181  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:38.481531  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:48.723789  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:09.205163  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:50.167024  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:29:12.089552  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:29:30.128468  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.075832909s)

-- stdout --
	* [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34883
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34883 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000117256s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000226772s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000226772s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
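The stderr above ends with minikube's own suggestion: the kubelet cgroup-driver mismatch tracked in minikube issue #4172. A minimal retry sketch that adds only that suggested flag to the otherwise identical start invocation (whether it clears this particular healthz timeout is an assumption, not something this report shows):

	out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails its http://127.0.0.1:10248/healthz check, the error text's own next steps apply: 'systemctl status kubelet' and 'journalctl -xeu kubelet' from inside the node (e.g. via minikube ssh).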
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
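The inspect dump above can be narrowed to the two facts the post-mortem cares about, using docker's built-in Go-template formatting (field paths taken from the JSON above; the index form is needed because the network name contains a hyphen):

	docker inspect -f '{{.State.Status}}' functional-386544
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-386544").IPAddress}}' functional-386544

Against the state captured here these print 'running' and '192.168.49.2': the container itself is up, so the failure is kubeadm/kubelet inside it, not the docker driver.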
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 6 (337.347472ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 00:30:41.892374  890638 status.go:458] kubeconfig endpoint: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
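The status output explains the exit code: 'functional-386544' never reached the kubeconfig because the start aborted before admin.conf was merged, so kubectl still points at a stale context. The fix the warning itself names (a sketch; it only helps once the cluster actually starts):

	out/minikube-linux-arm64 update-context -p functional-386544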
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh sudo cat /usr/share/ca-certificates/8467112.pem                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save kicbase/echo-server:functional-932121 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image rm kicbase/echo-server:functional-932121 --alsologtostderr                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format short --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format yaml --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format json --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format table --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh pgrep buildkitd                                                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image          │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                          │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete         │ -p functional-932121                                                                                                                                            │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start          │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:22:22
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:22:22.512090  885129 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:22:22.512193  885129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:22:22.512197  885129 out.go:374] Setting ErrFile to fd 2...
	I1208 00:22:22.512201  885129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:22:22.512439  885129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:22:22.512841  885129 out.go:368] Setting JSON to false
	I1208 00:22:22.513726  885129 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18295,"bootTime":1765135047,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:22:22.513785  885129 start.go:143] virtualization:  
	I1208 00:22:22.518377  885129 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:22:22.522245  885129 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:22:22.522325  885129 notify.go:221] Checking for updates...
	I1208 00:22:22.526559  885129 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:22:22.529989  885129 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:22:22.533381  885129 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:22:22.536715  885129 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:22:22.540088  885129 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:22:22.543444  885129 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:22:22.580975  885129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:22:22.581104  885129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:22:22.640454  885129 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:22:22.631219957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:22:22.640548  885129 docker.go:319] overlay module found
	I1208 00:22:22.643908  885129 out.go:179] * Using the docker driver based on user configuration
	I1208 00:22:22.647008  885129 start.go:309] selected driver: docker
	I1208 00:22:22.647016  885129 start.go:927] validating driver "docker" against <nil>
	I1208 00:22:22.647028  885129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:22:22.647733  885129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:22:22.710733  885129 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:22:22.700399589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:22:22.710888  885129 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:22:22.711107  885129 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:22:22.714074  885129 out.go:179] * Using Docker driver with root privileges
	I1208 00:22:22.717095  885129 cni.go:84] Creating CNI manager for ""
	I1208 00:22:22.717151  885129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:22:22.717158  885129 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:22:22.717231  885129 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:22:22.722520  885129 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:22:22.725484  885129 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:22:22.728490  885129 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:22:22.731402  885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:22:22.731445  885129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:22:22.731445  885129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:22:22.731453  885129 cache.go:65] Caching tarball of preloaded images
	I1208 00:22:22.731551  885129 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:22:22.731560  885129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:22:22.731899  885129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:22:22.731917  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json: {Name:mkc1cab28ef3e474ac0a5249c6807f96abc9927d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:22.751608  885129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:22:22.751620  885129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:22:22.751633  885129 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:22:22.751664  885129 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:22:22.751771  885129 start.go:364] duration metric: took 92.169µs to acquireMachinesLock for "functional-386544"
	I1208 00:22:22.751795  885129 start.go:93] Provisioning new machine with config: &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:22:22.751875  885129 start.go:125] createHost starting for "" (driver="docker")
	I1208 00:22:22.757220  885129 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1208 00:22:22.757505  885129 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34883 to docker env.
	I1208 00:22:22.757534  885129 start.go:159] libmachine.API.Create for "functional-386544" (driver="docker")
	I1208 00:22:22.757555  885129 client.go:173] LocalClient.Create starting
	I1208 00:22:22.757617  885129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 00:22:22.757652  885129 main.go:143] libmachine: Decoding PEM data...
	I1208 00:22:22.757683  885129 main.go:143] libmachine: Parsing certificate...
	I1208 00:22:22.757744  885129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 00:22:22.757761  885129 main.go:143] libmachine: Decoding PEM data...
	I1208 00:22:22.757772  885129 main.go:143] libmachine: Parsing certificate...
	I1208 00:22:22.758161  885129 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 00:22:22.777609  885129 cli_runner.go:211] docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 00:22:22.777708  885129 network_create.go:284] running [docker network inspect functional-386544] to gather additional debugging logs...
	I1208 00:22:22.777725  885129 cli_runner.go:164] Run: docker network inspect functional-386544
	W1208 00:22:22.802644  885129 cli_runner.go:211] docker network inspect functional-386544 returned with exit code 1
	I1208 00:22:22.802670  885129 network_create.go:287] error running [docker network inspect functional-386544]: docker network inspect functional-386544: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-386544 not found
	I1208 00:22:22.802691  885129 network_create.go:289] output of [docker network inspect functional-386544]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-386544 not found
	
	** /stderr **
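
The failed inspect above is the expected path on a first start: exit status 1 plus a "not found" stderr tells minikube the network simply doesn't exist yet, so it can go on to create it. A minimal Go sketch of that probe, shelling out to the same docker CLI command shown in the log (the helper name is hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists runs `docker network inspect <name>` and treats a
// "not found" error as "the network does not exist yet" rather than
// a hard failure, mirroring the log above.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil // expected on first start; safe to create the network
	}
	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
}

func main() {
	exists, err := networkExists("functional-386544")
	fmt.Println(exists, err)
}
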
	I1208 00:22:22.802885  885129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:22:22.822400  885129 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001878f50}
	I1208 00:22:22.822440  885129 network_create.go:124] attempt to create docker network functional-386544 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1208 00:22:22.822528  885129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-386544 functional-386544
	I1208 00:22:22.883900  885129 network_create.go:108] docker network functional-386544 192.168.49.0/24 created
	I1208 00:22:22.883923  885129 kic.go:121] calculated static IP "192.168.49.2" for the "functional-386544" container
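
The "calculated static IP" falls out of simple address arithmetic on the free subnet found above: the gateway takes the first host address (.1) and the node container the next one (.2). A sketch of that arithmetic only, not minikube's actual implementation:

package main

import (
	"fmt"
	"net"
)

// gatewayAndFirstClient derives the gateway (.1) and first client (.2)
// addresses from a /24 subnet, the way 192.168.49.1 and 192.168.49.2
// are derived from 192.168.49.0/24 in the log above.
func gatewayAndFirstClient(cidr string) (net.IP, net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, nil, err
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)
	client := net.IPv4(base[0], base[1], base[2], base[3]+2)
	return gateway, client, nil
}

func main() {
	gw, ip, _ := gatewayAndFirstClient("192.168.49.0/24")
	fmt.Println(gw, ip) // 192.168.49.1 192.168.49.2
}
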
	I1208 00:22:22.884002  885129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 00:22:22.898723  885129 cli_runner.go:164] Run: docker volume create functional-386544 --label name.minikube.sigs.k8s.io=functional-386544 --label created_by.minikube.sigs.k8s.io=true
	I1208 00:22:22.917792  885129 oci.go:103] Successfully created a docker volume functional-386544
	I1208 00:22:22.917869  885129 cli_runner.go:164] Run: docker run --rm --name functional-386544-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-386544 --entrypoint /usr/bin/test -v functional-386544:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 00:22:23.451691  885129 oci.go:107] Successfully prepared a docker volume functional-386544
	I1208 00:22:23.451760  885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:22:23.451768  885129 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 00:22:23.451853  885129 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-386544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 00:22:27.407008  885129 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-386544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.955118511s)
	I1208 00:22:27.407028  885129 kic.go:203] duration metric: took 3.955257351s to extract preloaded images to volume ...
	W1208 00:22:27.407182  885129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 00:22:27.407279  885129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 00:22:27.475291  885129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-386544 --name functional-386544 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-386544 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-386544 --network functional-386544 --ip 192.168.49.2 --volume functional-386544:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 00:22:27.785554  885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Running}}
	I1208 00:22:27.810912  885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:22:27.836609  885129 cli_runner.go:164] Run: docker exec functional-386544 stat /var/lib/dpkg/alternatives/iptables
	I1208 00:22:27.890159  885129 oci.go:144] the created container "functional-386544" has a running status.
	I1208 00:22:27.890180  885129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa...
	I1208 00:22:28.001631  885129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 00:22:28.030968  885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:22:28.058416  885129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 00:22:28.058428  885129 kic_runner.go:114] Args: [docker exec --privileged functional-386544 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 00:22:28.128624  885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:22:28.160295  885129 machine.go:94] provisionDockerMachine start ...
	I1208 00:22:28.160392  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:28.186554  885129 main.go:143] libmachine: Using SSH client type: native
	I1208 00:22:28.186889  885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:22:28.186897  885129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:22:28.187495  885129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 00:22:31.338268  885129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:22:31.338282  885129 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:22:31.338348  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:31.360260  885129 main.go:143] libmachine: Using SSH client type: native
	I1208 00:22:31.360575  885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:22:31.360583  885129 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:22:31.520531  885129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:22:31.520617  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:31.539097  885129 main.go:143] libmachine: Using SSH client type: native
	I1208 00:22:31.539403  885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:22:31.539420  885129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:22:31.690612  885129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:22:31.690627  885129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:22:31.690657  885129 ubuntu.go:190] setting up certificates
	I1208 00:22:31.690666  885129 provision.go:84] configureAuth start
	I1208 00:22:31.690725  885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:22:31.707701  885129 provision.go:143] copyHostCerts
	I1208 00:22:31.707761  885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:22:31.707769  885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:22:31.707851  885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:22:31.707950  885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:22:31.707953  885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:22:31.707979  885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:22:31.708040  885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:22:31.708044  885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:22:31.708066  885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:22:31.708116  885129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
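
The essential part of the server cert generated above is its SAN list, the san=[...] set combining the container IP, loopback, and hostnames, so the machine is trusted under every name it can be reached by. A hedged sketch using Go's crypto/x509; it self-signs for brevity, whereas the real cert is signed with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-386544"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		// The SAN entries from the log's san=[...] line:
		DNSNames:    []string{"functional-386544", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (parent == template); minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
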
	I1208 00:22:31.993694  885129 provision.go:177] copyRemoteCerts
	I1208 00:22:31.993751  885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:22:31.993797  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:32.013965  885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:22:32.122251  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:22:32.139425  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:22:32.156636  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:22:32.174174  885129 provision.go:87] duration metric: took 483.486089ms to configureAuth
	I1208 00:22:32.174192  885129 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:22:32.174382  885129 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:22:32.174389  885129 machine.go:97] duration metric: took 4.01408375s to provisionDockerMachine
	I1208 00:22:32.174394  885129 client.go:176] duration metric: took 9.416835024s to LocalClient.Create
	I1208 00:22:32.174407  885129 start.go:167] duration metric: took 9.416876272s to libmachine.API.Create "functional-386544"
	I1208 00:22:32.174412  885129 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:22:32.174421  885129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:22:32.174562  885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:22:32.174599  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:32.192357  885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:22:32.298517  885129 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:22:32.301738  885129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:22:32.301755  885129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:22:32.301765  885129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:22:32.301823  885129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:22:32.301915  885129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:22:32.301990  885129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:22:32.302043  885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:22:32.309721  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:22:32.327056  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:22:32.345724  885129 start.go:296] duration metric: took 171.297297ms for postStartSetup
	I1208 00:22:32.346100  885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:22:32.363671  885129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:22:32.363950  885129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:22:32.363989  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:32.381065  885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:22:32.483704  885129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:22:32.488672  885129 start.go:128] duration metric: took 9.736783107s to createHost
	I1208 00:22:32.488687  885129 start.go:83] releasing machines lock for "functional-386544", held for 9.736909623s
	I1208 00:22:32.488766  885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:22:32.509799  885129 out.go:179] * Found network options:
	I1208 00:22:32.512729  885129 out.go:179]   - HTTP_PROXY=localhost:34883
	W1208 00:22:32.515752  885129 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1208 00:22:32.518635  885129 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1208 00:22:32.521423  885129 ssh_runner.go:195] Run: cat /version.json
	I1208 00:22:32.521468  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:32.521498  885129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:22:32.521550  885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:22:32.539803  885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:22:32.548653  885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:22:32.646126  885129 ssh_runner.go:195] Run: systemctl --version
	I1208 00:22:32.745008  885129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:22:32.749524  885129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:22:32.749605  885129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:22:32.777737  885129 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 00:22:32.777751  885129 start.go:496] detecting cgroup driver to use...
	I1208 00:22:32.777786  885129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:22:32.777842  885129 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:22:32.792673  885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:22:32.805545  885129 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:22:32.805597  885129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:22:32.823501  885129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:22:32.842234  885129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:22:32.953988  885129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:22:33.084770  885129 docker.go:234] disabling docker service ...
	I1208 00:22:33.084824  885129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:22:33.106566  885129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:22:33.120427  885129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:22:33.245224  885129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:22:33.368146  885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:22:33.382202  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:22:33.398161  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:22:33.408136  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:22:33.417496  885129 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:22:33.417558  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:22:33.426603  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:22:33.436212  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:22:33.445084  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:22:33.454235  885129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:22:33.462539  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:22:33.472121  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:22:33.481022  885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 00:22:33.490531  885129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:22:33.498399  885129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:22:33.506396  885129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:22:33.626744  885129 ssh_runner.go:195] Run: sudo systemctl restart containerd
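
The run of sed edits above rewrites /etc/containerd/config.toml in place before the restart; for the detected "cgroupfs" driver the decisive change is forcing SystemdCgroup = false. The same substitution expressed as a Go regexp, as an illustration of the edit rather than minikube's implementation:

package main

import (
	"fmt"
	"regexp"
)

// setSystemdCgroup applies the same substitution as the sed command in
// the log: every `SystemdCgroup = <anything>` line keeps its indentation
// and is rewritten to the requested value.
func setSystemdCgroup(config []byte, enabled bool) []byte {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAll(config, []byte(fmt.Sprintf("${1}SystemdCgroup = %v", enabled)))
}

func main() {
	in := []byte("    SystemdCgroup = true\n")
	fmt.Printf("%s", setSystemdCgroup(in, false)) // "    SystemdCgroup = false"
}
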
	I1208 00:22:33.752269  885129 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:22:33.752359  885129 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:22:33.757435  885129 start.go:564] Will wait 60s for crictl version
	I1208 00:22:33.757492  885129 ssh_runner.go:195] Run: which crictl
	I1208 00:22:33.761189  885129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:22:33.786619  885129 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:22:33.786696  885129 ssh_runner.go:195] Run: containerd --version
	I1208 00:22:33.809236  885129 ssh_runner.go:195] Run: containerd --version
	I1208 00:22:33.834001  885129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:22:33.837045  885129 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:22:33.854269  885129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:22:33.858150  885129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 00:22:33.867900  885129 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:22:33.868010  885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:22:33.868073  885129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:22:33.892051  885129 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:22:33.892062  885129 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:22:33.892119  885129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:22:33.920620  885129 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:22:33.920632  885129 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:22:33.920638  885129 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:22:33.920725  885129 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:22:33.920786  885129 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:22:33.945962  885129 cni.go:84] Creating CNI manager for ""
	I1208 00:22:33.945972  885129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:22:33.946002  885129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:22:33.946030  885129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:22:33.946163  885129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:22:33.946243  885129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:22:33.954233  885129 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:22:33.954293  885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:22:33.962121  885129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:22:33.975135  885129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:22:33.988515  885129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 00:22:34.002079  885129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:22:34.009170  885129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
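
The bash one-liner above (also used earlier for host.minikube.internal) makes the /etc/hosts entry idempotent: drop any stale line for the name with grep -v, then append the fresh mapping and copy the temp file back. The same idea as a pure-string Go sketch (hypothetical helper, just to show the pattern):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>" and
// appends a fresh "ip\tname" mapping, mirroring the grep -v / echo pipeline.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	return out + ip + "\t" + name + "\n"
}

func main() {
	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.168.49.2", "control-plane.minikube.internal"))
}
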
	I1208 00:22:34.019711  885129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:22:34.131982  885129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:22:34.149574  885129 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:22:34.149584  885129 certs.go:195] generating shared ca certs ...
	I1208 00:22:34.149598  885129 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:34.149768  885129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:22:34.149818  885129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:22:34.149824  885129 certs.go:257] generating profile certs ...
	I1208 00:22:34.149880  885129 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:22:34.149890  885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt with IP's: []
	I1208 00:22:34.554511  885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt ...
	I1208 00:22:34.554533  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: {Name:mk8c4f8c2202b6c32ae112dd78671007ae8aced1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:34.554743  885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key ...
	I1208 00:22:34.554750  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key: {Name:mk192376a5f948ac55d5b18453d878914eefadd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:34.554844  885129 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:22:34.554855  885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1208 00:22:34.804631  885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf ...
	I1208 00:22:34.804646  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf: {Name:mk7540ce13d5fd8467ffa88e2f59a8c37e04dfcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:34.804832  885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf ...
	I1208 00:22:34.804839  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf: {Name:mk0107f76d167d52676a2f955fcc6e93af70104d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:34.804924  885129 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt
	I1208 00:22:34.804998  885129 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key
	I1208 00:22:34.805049  885129 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:22:34.805060  885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt with IP's: []
	I1208 00:22:35.007292  885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt ...
	I1208 00:22:35.007310  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt: {Name:mk7784073660c6af0850dd3f80c2a68d59de8031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:35.007540  885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key ...
	I1208 00:22:35.007548  885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key: {Name:mk8c989a08ec86e3c644d91c8ebe42e8b21d0beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:22:35.007752  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:22:35.007798  885129 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:22:35.007806  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:22:35.007840  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:22:35.007865  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:22:35.007891  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:22:35.007935  885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:22:35.008553  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:22:35.029445  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:22:35.048467  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:22:35.066330  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:22:35.085182  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:22:35.104337  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:22:35.122614  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:22:35.141515  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:22:35.160538  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:22:35.178815  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:22:35.197195  885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:22:35.215958  885129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:22:35.229251  885129 ssh_runner.go:195] Run: openssl version
	I1208 00:22:35.236108  885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:22:35.244050  885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:22:35.251957  885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:22:35.255851  885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:22:35.255911  885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:22:35.297245  885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:22:35.304788  885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 00:22:35.312032  885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:22:35.319323  885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:22:35.326671  885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:22:35.330413  885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:22:35.330531  885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:22:35.371864  885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:22:35.379473  885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 00:22:35.387024  885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:22:35.394299  885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:22:35.401854  885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:22:35.405481  885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:22:35.405537  885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:22:35.446307  885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:22:35.454437  885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
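
The openssl x509 -hash / ln -fs pairs above follow the OpenSSL c_rehash convention: each trusted certificate is symlinked as <subject-hash>.0 in /etc/ssl/certs so TLS libraries can locate it by hash lookup. A sketch of one such link, assuming only openssl on PATH (the helper name is made up):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a PEM cert and
// symlinks it as <hash>.0 in the certs directory, as in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
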
	I1208 00:22:35.462077  885129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:22:35.465776  885129 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 00:22:35.465820  885129 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:22:35.465906  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:22:35.465981  885129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:22:35.496583  885129 cri.go:89] found id: ""
	I1208 00:22:35.496646  885129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:22:35.504659  885129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:22:35.512654  885129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:22:35.512714  885129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:22:35.520589  885129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:22:35.520620  885129 kubeadm.go:158] found existing configuration files:
	
	I1208 00:22:35.520675  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:22:35.528549  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:22:35.528605  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:22:35.536361  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:22:35.544141  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:22:35.544208  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:22:35.551783  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:22:35.560017  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:22:35.560083  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:22:35.567705  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:22:35.575391  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:22:35.575448  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
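The eight commands above are minikube's stale-config cleanup: for each kubeconfig it greps for the expected control-plane endpoint and deletes the file when the endpoint is absent (or, as here, when the file does not exist at all). Condensed into a sketch, using only the paths and endpoint shown in the log:

    for f in admin kubelet controller-manager scheduler; do
      # Keep the file only if it already points at the expected endpoint;
      # otherwise remove it so kubeadm regenerates it.
      sudo grep "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done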
	I1208 00:22:35.582984  885129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:22:35.642032  885129 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:22:35.642123  885129 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:22:35.716738  885129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:22:35.716806  885129 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:22:35.716840  885129 kubeadm.go:319] OS: Linux
	I1208 00:22:35.716883  885129 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:22:35.716930  885129 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:22:35.716976  885129 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:22:35.717022  885129 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:22:35.717070  885129 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:22:35.717129  885129 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:22:35.717175  885129 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:22:35.717222  885129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:22:35.717267  885129 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:22:35.783782  885129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:22:35.783921  885129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:22:35.784041  885129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:22:35.789795  885129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:22:35.796171  885129 out.go:252]   - Generating certificates and keys ...
	I1208 00:22:35.796272  885129 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:22:35.796349  885129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:22:35.861882  885129 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 00:22:36.050370  885129 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 00:22:36.241762  885129 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 00:22:36.401587  885129 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 00:22:36.714587  885129 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 00:22:36.714889  885129 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:22:36.815487  885129 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 00:22:36.815889  885129 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1208 00:22:36.879703  885129 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 00:22:37.144293  885129 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 00:22:37.362810  885129 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 00:22:37.362959  885129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:22:37.552787  885129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:22:37.809527  885129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:22:38.042851  885129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:22:38.705792  885129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:22:38.801202  885129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:22:38.802097  885129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:22:38.804908  885129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:22:38.808308  885129 out.go:252]   - Booting up control plane ...
	I1208 00:22:38.808409  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:22:38.808781  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:22:38.810329  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:22:38.827643  885129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:22:38.827920  885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:22:38.835521  885129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:22:38.835816  885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:22:38.835990  885129 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:22:38.982514  885129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:22:38.982625  885129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:26:38.974998  885129 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000117256s
	I1208 00:26:38.975027  885129 kubeadm.go:319] 
	I1208 00:26:38.975129  885129 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:26:38.975238  885129 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:26:38.975572  885129 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:26:38.975579  885129 kubeadm.go:319] 
	I1208 00:26:38.975769  885129 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:26:38.975825  885129 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:26:38.975985  885129 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:26:38.975992  885129 kubeadm.go:319] 
	I1208 00:26:38.980887  885129 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:26:38.981423  885129 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:26:38.981538  885129 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:26:38.981823  885129 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 00:26:38.981827  885129 kubeadm.go:319] 
	I1208 00:26:38.981900  885129 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:26:38.982018  885129 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000117256s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
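While kubeadm sits in the wait-control-plane phase, the health probe it performs is the plain HTTP check quoted in the error text above; running it by hand on the node reproduces the failure mode directly:

    # The exact probe kubeadm reports running (taken from the error above):
    curl -sSL http://127.0.0.1:10248/healthz
    # In this run it fails with "connection refused": the kubelet process
    # exits immediately and never opens its healthz port.

After this first failure minikube resets the node with kubeadm reset and retries the identical init once more, as the lines that follow show.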
	
	I1208 00:26:38.982112  885129 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:26:39.396654  885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:26:39.410542  885129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:26:39.410599  885129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:26:39.418641  885129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:26:39.418651  885129 kubeadm.go:158] found existing configuration files:
	
	I1208 00:26:39.418702  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:26:39.426662  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:26:39.426718  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:26:39.434100  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:26:39.442294  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:26:39.442359  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:26:39.450019  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:26:39.458090  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:26:39.458153  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:26:39.465940  885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:26:39.474176  885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:26:39.474237  885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:26:39.481812  885129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:26:39.523064  885129 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:26:39.523398  885129 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:26:39.602239  885129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:26:39.602300  885129 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:26:39.602333  885129 kubeadm.go:319] OS: Linux
	I1208 00:26:39.602374  885129 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:26:39.602419  885129 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:26:39.602481  885129 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:26:39.602526  885129 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:26:39.602570  885129 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:26:39.602614  885129 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:26:39.602656  885129 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:26:39.602701  885129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:26:39.602743  885129 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:26:39.665982  885129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:26:39.666105  885129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:26:39.666220  885129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:26:39.674987  885129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:26:39.678542  885129 out.go:252]   - Generating certificates and keys ...
	I1208 00:26:39.678631  885129 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:26:39.678695  885129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:26:39.678771  885129 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:26:39.678832  885129 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:26:39.678901  885129 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:26:39.678955  885129 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:26:39.679017  885129 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:26:39.679081  885129 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:26:39.679157  885129 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:26:39.679229  885129 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:26:39.679266  885129 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:26:39.679322  885129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:26:39.823877  885129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:26:39.913534  885129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:26:40.267910  885129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:26:40.705717  885129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:26:40.910523  885129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:26:40.911323  885129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:26:40.914133  885129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:26:40.917460  885129 out.go:252]   - Booting up control plane ...
	I1208 00:26:40.917574  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:26:40.917674  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:26:40.919234  885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:26:40.940979  885129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:26:40.941259  885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:26:40.948778  885129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:26:40.949041  885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:26:40.949081  885129 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:26:41.087781  885129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:26:41.087900  885129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:30:41.087554  885129 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000226772s
	I1208 00:30:41.087575  885129 kubeadm.go:319] 
	I1208 00:30:41.087631  885129 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:30:41.087664  885129 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:30:41.087801  885129 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:30:41.087813  885129 kubeadm.go:319] 
	I1208 00:30:41.087917  885129 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:30:41.087949  885129 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:30:41.087979  885129 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:30:41.087982  885129 kubeadm.go:319] 
	I1208 00:30:41.092295  885129 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:30:41.092709  885129 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:30:41.092818  885129 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:30:41.093053  885129 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:30:41.093058  885129 kubeadm.go:319] 
	I1208 00:30:41.093124  885129 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:30:41.093177  885129 kubeadm.go:403] duration metric: took 8m5.627362264s to StartCluster
	I1208 00:30:41.093211  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:30:41.093285  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:30:41.130255  885129 cri.go:89] found id: ""
	I1208 00:30:41.130269  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.130276  885129 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:30:41.130281  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:30:41.130357  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:30:41.163078  885129 cri.go:89] found id: ""
	I1208 00:30:41.163093  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.163100  885129 logs.go:284] No container was found matching "etcd"
	I1208 00:30:41.163105  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:30:41.163165  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:30:41.190236  885129 cri.go:89] found id: ""
	I1208 00:30:41.190250  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.190257  885129 logs.go:284] No container was found matching "coredns"
	I1208 00:30:41.190262  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:30:41.190323  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:30:41.217596  885129 cri.go:89] found id: ""
	I1208 00:30:41.217610  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.217618  885129 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:30:41.217623  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:30:41.217686  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:30:41.245271  885129 cri.go:89] found id: ""
	I1208 00:30:41.245285  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.245292  885129 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:30:41.245298  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:30:41.245359  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:30:41.270742  885129 cri.go:89] found id: ""
	I1208 00:30:41.270760  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.270768  885129 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:30:41.270773  885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:30:41.270839  885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:30:41.300287  885129 cri.go:89] found id: ""
	I1208 00:30:41.300302  885129 logs.go:282] 0 containers: []
	W1208 00:30:41.300310  885129 logs.go:284] No container was found matching "kindnet"
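The block above is minikube's per-component post-mortem: one crictl query per expected control-plane container, each returning an empty ID list. The probe amounts to:

    # One query per component name, mirroring the log above; every query
    # comes back empty because no control-plane container was ever created.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="${name}"
    done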
	I1208 00:30:41.300318  885129 logs.go:123] Gathering logs for containerd ...
	I1208 00:30:41.300327  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:30:41.339554  885129 logs.go:123] Gathering logs for container status ...
	I1208 00:30:41.339575  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:30:41.375527  885129 logs.go:123] Gathering logs for kubelet ...
	I1208 00:30:41.375543  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:30:41.432450  885129 logs.go:123] Gathering logs for dmesg ...
	I1208 00:30:41.432469  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:30:41.447916  885129 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:30:41.447932  885129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:30:41.515619  885129 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:30:41.506869    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.507467    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.509078    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.509516    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.511082    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:30:41.506869    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.507467    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.509078    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.509516    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:41.511082    4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
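The describe-nodes failure is a downstream symptom rather than a separate problem: with no kube-apiserver container running, nothing listens on port 8441, so every kubectl call is refused. A hypothetical spot check on the node (not part of this log) that would confirm it:

    # Hypothetical check: expect no listener, matching the refusals above.
    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"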
	W1208 00:30:41.515633  885129 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000226772s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:30:41.515663  885129 out.go:285] * 
	W1208 00:30:41.515738  885129 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000226772s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:30:41.515765  885129 out.go:285] * 
	W1208 00:30:41.517900  885129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:30:41.524942  885129 out.go:203] 
	W1208 00:30:41.527855  885129 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000226772s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:30:41.527904  885129 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:30:41.527923  885129 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
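The suggestion minikube prints maps to a concrete retry. A sketch of that invocation, assembled from the suggestion above and the cluster parameters in this run's StartCluster config; this report does not show whether it would succeed:

    minikube start -p functional-386544 --memory=4096 --apiserver-port=8441 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd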
	I1208 00:30:41.531137  885129 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691456285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691521500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691639794Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691729928Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691793453Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691859209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691924539Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691984897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692058899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692159011Z" level=info msg="Connect containerd service"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692536040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.693237421Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.703828259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.704048470Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.703965622Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.708322970Z" level=info msg="Start recovering state"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749034666Z" level=info msg="Start event monitor"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749218758Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749302303Z" level=info msg="Start streaming server"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749373844Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749437295Z" level=info msg="runtime interface starting up..."
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749492483Z" level=info msg="starting plugins..."
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749559028Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:22:33 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.751989055Z" level=info msg="containerd successfully booted in 0.083175s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:30:42.569164    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:42.569955    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:42.571679    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:42.572058    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:30:42.573541    4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:30:42 up  5:13,  0 user,  load average: 0.33, 0.75, 1.46
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:30:39 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:39 functional-386544 kubelet[4708]: E1208 00:30:39.634748    4708 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:30:39 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:30:39 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 08 00:30:40 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:40 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:40 functional-386544 kubelet[4714]: E1208 00:30:40.388360    4714 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 00:30:41 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:41 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:41 functional-386544 kubelet[4719]: E1208 00:30:41.156972    4719 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 00:30:41 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:41 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:41 functional-386544 kubelet[4816]: E1208 00:30:41.882898    4816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:30:42 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 00:30:42 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:30:42 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
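	[editor's note] Every iteration of this restart loop dies in the same validation: this kubelet build (v1.35.0-beta.0) refuses to start on a cgroup v1 host. A quick way to see which cgroup hierarchy the host exposes; the command is standard coreutils, and the interpretation comment is our addition:
	    stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on a v2 (unified) host, "tmpfs" on v1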
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 6 (342.836058ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 00:30:43.036873  890864 status.go:458] kubeconfig endpoint: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.59s)
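[editor's note] The stale-kubeconfig warning in the status output above has a standard fix; for this profile it would be (profile name from the log, command per the minikube CLI):
    minikube update-context -p functional-386544
Note this only repairs the kubeconfig endpoint entry; it would not resolve the underlying kubelet cgroup failure that keeps the apiserver down.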

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1208 00:30:43.054602  846711 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --alsologtostderr -v=8
E1208 00:31:28.221664  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:31:55.931733  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:34:30.128131  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:35:53.200642  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:36:28.221869  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --alsologtostderr -v=8: exit status 80 (6m6.06032383s)

-- stdout --
	* [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:30:43.106412  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106440  890932 out.go:374] Setting ErrFile to fd 2...
	I1208 00:30:43.106489  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106802  890932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:30:43.107327  890932 out.go:368] Setting JSON to false
	I1208 00:30:43.108252  890932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18796,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:30:43.108353  890932 start.go:143] virtualization:  
	I1208 00:30:43.111927  890932 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:30:43.114895  890932 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:30:43.114974  890932 notify.go:221] Checking for updates...
	I1208 00:30:43.121042  890932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:30:43.124118  890932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:43.127146  890932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:30:43.130017  890932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:30:43.132953  890932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:30:43.136385  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:43.136518  890932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:30:43.171722  890932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:30:43.171844  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.232988  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.222800102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.233101  890932 docker.go:319] overlay module found
	I1208 00:30:43.236209  890932 out.go:179] * Using the docker driver based on existing profile
	I1208 00:30:43.239024  890932 start.go:309] selected driver: docker
	I1208 00:30:43.239046  890932 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.240193  890932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:30:43.240306  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.299458  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.288388391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.299888  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:43.299955  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:43.300012  890932 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.303163  890932 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:30:43.305985  890932 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:30:43.309025  890932 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:30:43.312042  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:43.312102  890932 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:30:43.312113  890932 cache.go:65] Caching tarball of preloaded images
	I1208 00:30:43.312160  890932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:30:43.312254  890932 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:30:43.312266  890932 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:30:43.312379  890932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:30:43.332475  890932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:30:43.332500  890932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:30:43.332516  890932 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:30:43.332550  890932 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:30:43.332614  890932 start.go:364] duration metric: took 40.517µs to acquireMachinesLock for "functional-386544"
	I1208 00:30:43.332637  890932 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:30:43.332643  890932 fix.go:54] fixHost starting: 
	I1208 00:30:43.332918  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:43.364362  890932 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:30:43.364391  890932 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:30:43.367522  890932 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:30:43.367561  890932 machine.go:94] provisionDockerMachine start ...
	I1208 00:30:43.367667  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.390594  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.390943  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.390953  890932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:30:43.546039  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.546064  890932 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:30:43.546132  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.563909  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.564221  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.564240  890932 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:30:43.728055  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.728136  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.746428  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.746778  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.746805  890932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:30:43.898980  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:30:43.899007  890932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:30:43.899068  890932 ubuntu.go:190] setting up certificates
	I1208 00:30:43.899078  890932 provision.go:84] configureAuth start
	I1208 00:30:43.899155  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:43.917225  890932 provision.go:143] copyHostCerts
	I1208 00:30:43.917271  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917317  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:30:43.917335  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917414  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:30:43.917515  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917537  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:30:43.917547  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917575  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:30:43.917632  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917656  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:30:43.917664  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917691  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:30:43.917796  890932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:30:44.201729  890932 provision.go:177] copyRemoteCerts
	I1208 00:30:44.201799  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:30:44.201847  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.218852  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.326622  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:30:44.326687  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:30:44.345138  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:30:44.345250  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:30:44.363475  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:30:44.363575  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:30:44.382571  890932 provision.go:87] duration metric: took 483.468304ms to configureAuth
	I1208 00:30:44.382643  890932 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:30:44.382843  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:44.382857  890932 machine.go:97] duration metric: took 1.015288541s to provisionDockerMachine
	I1208 00:30:44.382865  890932 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:30:44.382880  890932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:30:44.382939  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:30:44.382987  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.401380  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.506846  890932 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:30:44.510586  890932 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:30:44.510612  890932 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:30:44.510623  890932 command_runner.go:130] > VERSION_ID="12"
	I1208 00:30:44.510628  890932 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:30:44.510633  890932 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:30:44.510637  890932 command_runner.go:130] > ID=debian
	I1208 00:30:44.510641  890932 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:30:44.510646  890932 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:30:44.510652  890932 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:30:44.510734  890932 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:30:44.510755  890932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:30:44.510768  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:30:44.510833  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:30:44.510921  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:30:44.510932  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /etc/ssl/certs/8467112.pem
	I1208 00:30:44.511028  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:30:44.511037  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> /etc/test/nested/copy/846711/hosts
	I1208 00:30:44.511082  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:30:44.518977  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:44.538494  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:30:44.556928  890932 start.go:296] duration metric: took 174.046033ms for postStartSetup
	I1208 00:30:44.557012  890932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:30:44.557057  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.579278  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.683552  890932 command_runner.go:130] > 11%
	I1208 00:30:44.683622  890932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:30:44.688016  890932 command_runner.go:130] > 174G
	I1208 00:30:44.688056  890932 fix.go:56] duration metric: took 1.355411206s for fixHost
	I1208 00:30:44.688067  890932 start.go:83] releasing machines lock for "functional-386544", held for 1.355443108s
	I1208 00:30:44.688146  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:44.705277  890932 ssh_runner.go:195] Run: cat /version.json
	I1208 00:30:44.705345  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.705617  890932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:30:44.705687  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.723084  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.728238  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.826153  890932 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:30:44.826300  890932 ssh_runner.go:195] Run: systemctl --version
	I1208 00:30:44.917784  890932 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:30:44.920412  890932 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:30:44.920484  890932 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:30:44.920574  890932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:30:44.924900  890932 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:30:44.925095  890932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:30:44.925215  890932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:30:44.933474  890932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:30:44.933497  890932 start.go:496] detecting cgroup driver to use...
	I1208 00:30:44.933530  890932 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:30:44.933580  890932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:30:44.950010  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:30:44.963687  890932 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:30:44.963783  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:30:44.980391  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:30:44.994304  890932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:30:45.255981  890932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:30:45.407305  890932 docker.go:234] disabling docker service ...
	I1208 00:30:45.407423  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:30:45.423468  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:30:45.437222  890932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:30:45.561603  890932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:30:45.705878  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:30:45.719726  890932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:30:45.733506  890932 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1208 00:30:45.735147  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:30:45.744694  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:30:45.753960  890932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:30:45.754081  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:30:45.763511  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.772723  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:30:45.781584  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.790600  890932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:30:45.799135  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:30:45.808317  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:30:45.817244  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
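	[editor's note] Taken together, the sed substitutions above pin the pause image, force the cgroupfs driver (SystemdCgroup = false), and point the CNI conf_dir at /etc/cni/net.d. A sketch for verifying the result on the node (the grep invocation is our addition; the expected values are reconstructed from the substitutions, not a dump of the actual file):
	    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml
	    # expected, per the edits above:
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   restrict_oom_score_adj = false
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"
	    #   enable_unprivileged_ports = true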
	I1208 00:30:45.826211  890932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:30:45.833037  890932 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:30:45.834008  890932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:30:45.841603  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:45.965344  890932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:30:46.100261  890932 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:30:46.100385  890932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:30:46.104210  890932 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1208 00:30:46.104295  890932 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:30:46.104358  890932 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1208 00:30:46.104385  890932 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:46.104410  890932 command_runner.go:130] > Access: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104446  890932 command_runner.go:130] > Modify: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104470  890932 command_runner.go:130] > Change: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104490  890932 command_runner.go:130] >  Birth: -
	I1208 00:30:46.104859  890932 start.go:564] Will wait 60s for crictl version
	I1208 00:30:46.104961  890932 ssh_runner.go:195] Run: which crictl
	I1208 00:30:46.108543  890932 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:30:46.108924  890932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:30:46.136367  890932 command_runner.go:130] > Version:  0.1.0
	I1208 00:30:46.136449  890932 command_runner.go:130] > RuntimeName:  containerd
	I1208 00:30:46.136470  890932 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1208 00:30:46.136491  890932 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:30:46.136542  890932 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:30:46.136636  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.156742  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.159302  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.181269  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.189080  890932 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:30:46.192076  890932 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:30:46.209081  890932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:30:46.212923  890932 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:30:46.213097  890932 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:30:46.213209  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:46.213289  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.236482  890932 command_runner.go:130] > {
	I1208 00:30:46.236506  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.236511  890932 command_runner.go:130] >     {
	I1208 00:30:46.236520  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.236526  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236531  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.236534  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236538  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236551  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.236558  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236563  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.236571  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236576  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236586  890932 command_runner.go:130] >     },
	I1208 00:30:46.236590  890932 command_runner.go:130] >     {
	I1208 00:30:46.236601  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.236605  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236610  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.236617  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236622  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236632  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.236641  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236646  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.236649  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236654  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236664  890932 command_runner.go:130] >     },
	I1208 00:30:46.236668  890932 command_runner.go:130] >     {
	I1208 00:30:46.236675  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.236679  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236687  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.236690  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236699  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236718  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.236722  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236728  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.236733  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.236740  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236743  890932 command_runner.go:130] >     },
	I1208 00:30:46.236746  890932 command_runner.go:130] >     {
	I1208 00:30:46.236753  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.236760  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236766  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.236769  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236773  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236781  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.236788  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236792  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.236803  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236808  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236814  890932 command_runner.go:130] >       },
	I1208 00:30:46.236821  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236825  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236828  890932 command_runner.go:130] >     },
	I1208 00:30:46.236832  890932 command_runner.go:130] >     {
	I1208 00:30:46.236841  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.236847  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236853  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.236856  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236860  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236868  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.236874  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236879  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.236885  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236894  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236901  890932 command_runner.go:130] >       },
	I1208 00:30:46.236908  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236912  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236916  890932 command_runner.go:130] >     },
	I1208 00:30:46.236926  890932 command_runner.go:130] >     {
	I1208 00:30:46.236933  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.236937  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236943  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.236947  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236951  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236962  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.236968  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236972  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.236976  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236980  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236989  890932 command_runner.go:130] >       },
	I1208 00:30:46.236993  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236997  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237002  890932 command_runner.go:130] >     },
	I1208 00:30:46.237005  890932 command_runner.go:130] >     {
	I1208 00:30:46.237012  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.237017  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237024  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.237027  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237032  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237040  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.237047  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237055  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.237059  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237063  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237066  890932 command_runner.go:130] >     },
	I1208 00:30:46.237076  890932 command_runner.go:130] >     {
	I1208 00:30:46.237084  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.237095  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237104  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.237107  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237112  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237120  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.237126  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237131  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.237134  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237139  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.237142  890932 command_runner.go:130] >       },
	I1208 00:30:46.237146  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237153  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237157  890932 command_runner.go:130] >     },
	I1208 00:30:46.237166  890932 command_runner.go:130] >     {
	I1208 00:30:46.237173  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.237178  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237182  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.237189  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237193  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237201  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.237206  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237210  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.237216  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237221  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.237227  890932 command_runner.go:130] >       },
	I1208 00:30:46.237231  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237235  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.237238  890932 command_runner.go:130] >     }
	I1208 00:30:46.237241  890932 command_runner.go:130] >   ]
	I1208 00:30:46.237244  890932 command_runner.go:130] > }
	I1208 00:30:46.239834  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.239857  890932 containerd.go:534] Images already preloaded, skipping extraction
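[Editor's note] The JSON above is exactly what the preload check consumes: containerd.go compares the repo tags reported by "sudo crictl images --output json" against the expected image list before deciding "all images are preloaded". A minimal sketch of that decoding step, with hypothetical names (imageList is not minikube's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors only the fields of `crictl images --output json`
    // that matter for a preload check.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag) // e.g. registry.k8s.io/kube-proxy:v1.35.0-beta.0
            }
        }
    }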
	I1208 00:30:46.239919  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.262227  890932 command_runner.go:130] > {
	I1208 00:30:46.262250  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.262255  890932 command_runner.go:130] >     {
	I1208 00:30:46.262265  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.262280  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262286  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.262289  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262293  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262303  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.262310  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262315  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.262319  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262323  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262326  890932 command_runner.go:130] >     },
	I1208 00:30:46.262330  890932 command_runner.go:130] >     {
	I1208 00:30:46.262348  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.262357  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262363  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.262366  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262370  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262381  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.262386  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262392  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.262396  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262400  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262403  890932 command_runner.go:130] >     },
	I1208 00:30:46.262406  890932 command_runner.go:130] >     {
	I1208 00:30:46.262413  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.262427  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262439  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.262476  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262489  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262498  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.262502  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262506  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.262513  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.262517  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262524  890932 command_runner.go:130] >     },
	I1208 00:30:46.262531  890932 command_runner.go:130] >     {
	I1208 00:30:46.262539  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.262542  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262548  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.262553  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262557  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262565  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.262568  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262572  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.262579  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262583  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262588  890932 command_runner.go:130] >       },
	I1208 00:30:46.262592  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262605  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262609  890932 command_runner.go:130] >     },
	I1208 00:30:46.262612  890932 command_runner.go:130] >     {
	I1208 00:30:46.262619  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.262625  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262631  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.262634  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262638  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262646  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.262649  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262654  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.262660  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262678  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262686  890932 command_runner.go:130] >       },
	I1208 00:30:46.262690  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262694  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262697  890932 command_runner.go:130] >     },
	I1208 00:30:46.262701  890932 command_runner.go:130] >     {
	I1208 00:30:46.262707  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.262718  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262724  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.262727  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262731  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262739  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.262745  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262749  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.262755  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262759  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262772  890932 command_runner.go:130] >       },
	I1208 00:30:46.262776  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262780  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262783  890932 command_runner.go:130] >     },
	I1208 00:30:46.262786  890932 command_runner.go:130] >     {
	I1208 00:30:46.262793  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.262800  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262805  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.262809  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262812  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262819  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.262823  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262827  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.262834  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262838  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262844  890932 command_runner.go:130] >     },
	I1208 00:30:46.262848  890932 command_runner.go:130] >     {
	I1208 00:30:46.262857  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.262867  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262876  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.262882  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262886  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262893  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.262907  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262915  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.262919  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262922  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262929  890932 command_runner.go:130] >       },
	I1208 00:30:46.262933  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262943  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262947  890932 command_runner.go:130] >     },
	I1208 00:30:46.262950  890932 command_runner.go:130] >     {
	I1208 00:30:46.262957  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.262963  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262968  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.262971  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262975  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262982  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.262985  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262990  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.262996  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.263000  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.263013  890932 command_runner.go:130] >       },
	I1208 00:30:46.263017  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.263021  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.263024  890932 command_runner.go:130] >     }
	I1208 00:30:46.263027  890932 command_runner.go:130] >   ]
	I1208 00:30:46.263031  890932 command_runner.go:130] > }
	I1208 00:30:46.265493  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.265517  890932 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:30:46.265524  890932 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:30:46.265625  890932 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
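[Editor's note] The [Unit]/[Service] block above is the kubelet systemd drop-in that minikube renders and copies to the node (see the 10-kubeadm.conf transfer below). Minikube builds it from a Go template; the stand-alone sketch here just formats the same fields and is illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        kubelet := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet"
        node, ip := "functional-386544", "192.168.49.2"
        // The empty ExecStart= line clears any ExecStart inherited from the
        // base kubelet.service before the real command line is set.
        fmt.Fprintf(os.Stdout,
            "[Unit]\nWants=containerd.service\n\n[Service]\nExecStart=\nExecStart=%s --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n\n[Install]\n",
            kubelet, node, ip)
    }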
	I1208 00:30:46.265699  890932 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:30:46.291229  890932 command_runner.go:130] > {
	I1208 00:30:46.291250  890932 command_runner.go:130] >   "cniconfig": {
	I1208 00:30:46.291256  890932 command_runner.go:130] >     "Networks": [
	I1208 00:30:46.291260  890932 command_runner.go:130] >       {
	I1208 00:30:46.291266  890932 command_runner.go:130] >         "Config": {
	I1208 00:30:46.291271  890932 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1208 00:30:46.291283  890932 command_runner.go:130] >           "Name": "cni-loopback",
	I1208 00:30:46.291288  890932 command_runner.go:130] >           "Plugins": [
	I1208 00:30:46.291292  890932 command_runner.go:130] >             {
	I1208 00:30:46.291297  890932 command_runner.go:130] >               "Network": {
	I1208 00:30:46.291301  890932 command_runner.go:130] >                 "ipam": {},
	I1208 00:30:46.291307  890932 command_runner.go:130] >                 "type": "loopback"
	I1208 00:30:46.291311  890932 command_runner.go:130] >               },
	I1208 00:30:46.291322  890932 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1208 00:30:46.291326  890932 command_runner.go:130] >             }
	I1208 00:30:46.291334  890932 command_runner.go:130] >           ],
	I1208 00:30:46.291344  890932 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1208 00:30:46.291348  890932 command_runner.go:130] >         },
	I1208 00:30:46.291356  890932 command_runner.go:130] >         "IFName": "lo"
	I1208 00:30:46.291362  890932 command_runner.go:130] >       }
	I1208 00:30:46.291366  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291371  890932 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1208 00:30:46.291375  890932 command_runner.go:130] >     "PluginDirs": [
	I1208 00:30:46.291379  890932 command_runner.go:130] >       "/opt/cni/bin"
	I1208 00:30:46.291390  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291395  890932 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1208 00:30:46.291398  890932 command_runner.go:130] >     "Prefix": "eth"
	I1208 00:30:46.291402  890932 command_runner.go:130] >   },
	I1208 00:30:46.291411  890932 command_runner.go:130] >   "config": {
	I1208 00:30:46.291415  890932 command_runner.go:130] >     "cdiSpecDirs": [
	I1208 00:30:46.291419  890932 command_runner.go:130] >       "/etc/cdi",
	I1208 00:30:46.291427  890932 command_runner.go:130] >       "/var/run/cdi"
	I1208 00:30:46.291432  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291436  890932 command_runner.go:130] >     "cni": {
	I1208 00:30:46.291448  890932 command_runner.go:130] >       "binDir": "",
	I1208 00:30:46.291453  890932 command_runner.go:130] >       "binDirs": [
	I1208 00:30:46.291457  890932 command_runner.go:130] >         "/opt/cni/bin"
	I1208 00:30:46.291460  890932 command_runner.go:130] >       ],
	I1208 00:30:46.291464  890932 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1208 00:30:46.291468  890932 command_runner.go:130] >       "confTemplate": "",
	I1208 00:30:46.291472  890932 command_runner.go:130] >       "ipPref": "",
	I1208 00:30:46.291475  890932 command_runner.go:130] >       "maxConfNum": 1,
	I1208 00:30:46.291479  890932 command_runner.go:130] >       "setupSerially": false,
	I1208 00:30:46.291483  890932 command_runner.go:130] >       "useInternalLoopback": false
	I1208 00:30:46.291487  890932 command_runner.go:130] >     },
	I1208 00:30:46.291492  890932 command_runner.go:130] >     "containerd": {
	I1208 00:30:46.291499  890932 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1208 00:30:46.291504  890932 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1208 00:30:46.291509  890932 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1208 00:30:46.291515  890932 command_runner.go:130] >       "runtimes": {
	I1208 00:30:46.291519  890932 command_runner.go:130] >         "runc": {
	I1208 00:30:46.291527  890932 command_runner.go:130] >           "ContainerAnnotations": null,
	I1208 00:30:46.291533  890932 command_runner.go:130] >           "PodAnnotations": null,
	I1208 00:30:46.291545  890932 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1208 00:30:46.291550  890932 command_runner.go:130] >           "cgroupWritable": false,
	I1208 00:30:46.291554  890932 command_runner.go:130] >           "cniConfDir": "",
	I1208 00:30:46.291558  890932 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1208 00:30:46.291564  890932 command_runner.go:130] >           "io_type": "",
	I1208 00:30:46.291568  890932 command_runner.go:130] >           "options": {
	I1208 00:30:46.291576  890932 command_runner.go:130] >             "BinaryName": "",
	I1208 00:30:46.291580  890932 command_runner.go:130] >             "CriuImagePath": "",
	I1208 00:30:46.291588  890932 command_runner.go:130] >             "CriuWorkPath": "",
	I1208 00:30:46.291593  890932 command_runner.go:130] >             "IoGid": 0,
	I1208 00:30:46.291599  890932 command_runner.go:130] >             "IoUid": 0,
	I1208 00:30:46.291604  890932 command_runner.go:130] >             "NoNewKeyring": false,
	I1208 00:30:46.291615  890932 command_runner.go:130] >             "Root": "",
	I1208 00:30:46.291619  890932 command_runner.go:130] >             "ShimCgroup": "",
	I1208 00:30:46.291624  890932 command_runner.go:130] >             "SystemdCgroup": false
	I1208 00:30:46.291627  890932 command_runner.go:130] >           },
	I1208 00:30:46.291641  890932 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1208 00:30:46.291648  890932 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1208 00:30:46.291655  890932 command_runner.go:130] >           "runtimePath": "",
	I1208 00:30:46.291660  890932 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1208 00:30:46.291664  890932 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1208 00:30:46.291668  890932 command_runner.go:130] >           "snapshotter": ""
	I1208 00:30:46.291672  890932 command_runner.go:130] >         }
	I1208 00:30:46.291675  890932 command_runner.go:130] >       }
	I1208 00:30:46.291678  890932 command_runner.go:130] >     },
	I1208 00:30:46.291689  890932 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1208 00:30:46.291698  890932 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1208 00:30:46.291705  890932 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1208 00:30:46.291709  890932 command_runner.go:130] >     "disableApparmor": false,
	I1208 00:30:46.291714  890932 command_runner.go:130] >     "disableHugetlbController": true,
	I1208 00:30:46.291721  890932 command_runner.go:130] >     "disableProcMount": false,
	I1208 00:30:46.291726  890932 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1208 00:30:46.291730  890932 command_runner.go:130] >     "enableCDI": true,
	I1208 00:30:46.291740  890932 command_runner.go:130] >     "enableSelinux": false,
	I1208 00:30:46.291745  890932 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1208 00:30:46.291749  890932 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1208 00:30:46.291753  890932 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1208 00:30:46.291758  890932 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1208 00:30:46.291763  890932 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1208 00:30:46.291770  890932 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1208 00:30:46.291775  890932 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1208 00:30:46.291789  890932 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291798  890932 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1208 00:30:46.291803  890932 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291810  890932 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1208 00:30:46.291819  890932 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1208 00:30:46.291823  890932 command_runner.go:130] >   },
	I1208 00:30:46.291827  890932 command_runner.go:130] >   "features": {
	I1208 00:30:46.291831  890932 command_runner.go:130] >     "supplemental_groups_policy": true
	I1208 00:30:46.291835  890932 command_runner.go:130] >   },
	I1208 00:30:46.291839  890932 command_runner.go:130] >   "golang": "go1.24.9",
	I1208 00:30:46.291850  890932 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291862  890932 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291866  890932 command_runner.go:130] >   "runtimeHandlers": [
	I1208 00:30:46.291870  890932 command_runner.go:130] >     {
	I1208 00:30:46.291874  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291886  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291890  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291893  890932 command_runner.go:130] >       }
	I1208 00:30:46.291897  890932 command_runner.go:130] >     },
	I1208 00:30:46.291907  890932 command_runner.go:130] >     {
	I1208 00:30:46.291911  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291916  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291919  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291922  890932 command_runner.go:130] >       },
	I1208 00:30:46.291926  890932 command_runner.go:130] >       "name": "runc"
	I1208 00:30:46.291930  890932 command_runner.go:130] >     }
	I1208 00:30:46.291939  890932 command_runner.go:130] >   ],
	I1208 00:30:46.291952  890932 command_runner.go:130] >   "status": {
	I1208 00:30:46.291955  890932 command_runner.go:130] >     "conditions": [
	I1208 00:30:46.291959  890932 command_runner.go:130] >       {
	I1208 00:30:46.291962  890932 command_runner.go:130] >         "message": "",
	I1208 00:30:46.291966  890932 command_runner.go:130] >         "reason": "",
	I1208 00:30:46.291973  890932 command_runner.go:130] >         "status": true,
	I1208 00:30:46.291983  890932 command_runner.go:130] >         "type": "RuntimeReady"
	I1208 00:30:46.291990  890932 command_runner.go:130] >       },
	I1208 00:30:46.291993  890932 command_runner.go:130] >       {
	I1208 00:30:46.292000  890932 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1208 00:30:46.292004  890932 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1208 00:30:46.292009  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292013  890932 command_runner.go:130] >         "type": "NetworkReady"
	I1208 00:30:46.292019  890932 command_runner.go:130] >       },
	I1208 00:30:46.292022  890932 command_runner.go:130] >       {
	I1208 00:30:46.292047  890932 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1208 00:30:46.292057  890932 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1208 00:30:46.292063  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292068  890932 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1208 00:30:46.292074  890932 command_runner.go:130] >       }
	I1208 00:30:46.292077  890932 command_runner.go:130] >     ]
	I1208 00:30:46.292080  890932 command_runner.go:130] >   }
	I1208 00:30:46.292083  890932 command_runner.go:130] > }
	I1208 00:30:46.295037  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:46.295064  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
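[Editor's note] In the "crictl info" dump above, status.conditions is the part that drives the CNI decision: RuntimeReady is true, NetworkReady is false ("cni plugin not initialized"), which is expected before kindnet writes a config into /etc/cni/net.d. A minimal sketch of reading those conditions (hypothetical helper, not minikube's code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runtimeInfo mirrors only the status.conditions portion of `crictl info`.
    type runtimeInfo struct {
        Status struct {
            Conditions []struct {
                Type    string `json:"type"`
                Status  bool   `json:"status"`
                Message string `json:"message"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "info").Output()
        if err != nil {
            panic(err)
        }
        var info runtimeInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        for _, c := range info.Status.Conditions {
            if !c.Status {
                fmt.Printf("%s is not ready: %s\n", c.Type, c.Message)
            }
        }
    }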
	I1208 00:30:46.295108  890932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:30:46.295135  890932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:30:46.295307  890932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
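[Editor's note] The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new below. A quick sanity-check sketch that splits the stream and reports each document's kind; it assumes the gopkg.in/yaml.v3 dependency and is not part of minikube:

    package main

    import (
        "fmt"
        "os"
        "strings"

        "gopkg.in/yaml.v3" // assumed external dependency
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            var meta struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", meta.APIVersion, meta.Kind)
        }
    }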
	
	I1208 00:30:46.295389  890932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:30:46.302776  890932 command_runner.go:130] > kubeadm
	I1208 00:30:46.302853  890932 command_runner.go:130] > kubectl
	I1208 00:30:46.302863  890932 command_runner.go:130] > kubelet
	I1208 00:30:46.303600  890932 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:30:46.303710  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:30:46.311760  890932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:30:46.325760  890932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:30:46.340134  890932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 00:30:46.359100  890932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:30:46.362934  890932 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:30:46.363653  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:46.491856  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:47.343005  890932 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:30:47.343028  890932 certs.go:195] generating shared ca certs ...
	I1208 00:30:47.343054  890932 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:47.343240  890932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:30:47.343312  890932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:30:47.343326  890932 certs.go:257] generating profile certs ...
	I1208 00:30:47.343460  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:30:47.343536  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:30:47.343590  890932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:30:47.343612  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:30:47.343630  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:30:47.343655  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:30:47.343671  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:30:47.343691  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:30:47.343706  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:30:47.343719  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:30:47.343734  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:30:47.343800  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:30:47.343845  890932 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:30:47.343860  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:30:47.343888  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:30:47.343924  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:30:47.343960  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:30:47.344029  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:47.344078  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.344096  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem -> /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.344112  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.344800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:30:47.365934  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:30:47.392004  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:30:47.412283  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:30:47.434592  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:30:47.452176  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:30:47.471245  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:30:47.489925  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:30:47.511686  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:30:47.530800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:30:47.549900  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:30:47.568360  890932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:30:47.581856  890932 ssh_runner.go:195] Run: openssl version
	I1208 00:30:47.588310  890932 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:30:47.588394  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.596457  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:30:47.604012  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607834  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607889  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607941  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.648743  890932 command_runner.go:130] > 3ec20f2e
	I1208 00:30:47.649210  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:30:47.656730  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.664307  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:30:47.671943  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.675995  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676036  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676087  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.716996  890932 command_runner.go:130] > b5213941
	I1208 00:30:47.717090  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:30:47.724719  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.732215  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:30:47.740036  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744030  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744106  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744186  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.784659  890932 command_runner.go:130] > 51391683
	I1208 00:30:47.785207  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
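[Editor's note] The three openssl/ln sequences above install CA certificates the classic OpenSSL way: compute the subject hash of the PEM ("openssl x509 -hash -noout"), then symlink /etc/ssl/certs/<hash>.0 at the file so TLS libraries can locate it by hash. A compact sketch of one round trip (paths reuse the log's; the program itself is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // mimic `ln -fs`: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }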
	I1208 00:30:47.792679  890932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796767  890932 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796815  890932 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:30:47.796824  890932 command_runner.go:130] > Device: 259,1	Inode: 3390890     Links: 1
	I1208 00:30:47.796831  890932 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:47.796838  890932 command_runner.go:130] > Access: 2025-12-08 00:26:39.668848968 +0000
	I1208 00:30:47.796844  890932 command_runner.go:130] > Modify: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796849  890932 command_runner.go:130] > Change: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796854  890932 command_runner.go:130] >  Birth: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796956  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:30:47.837955  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.838424  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:30:47.879403  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.879847  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:30:47.921180  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.921679  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:30:47.962513  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.963017  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:30:48.007633  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:48.007748  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:30:48.052514  890932 command_runner.go:130] > Certificate will not expire
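[Editor's note] Each "Certificate will not expire" line is the success case of "openssl x509 -checkend 86400", which exits 0 if the certificate is still valid 86400 seconds (24 h) from now and non-zero otherwise. A sketch of driving that check from Go (cert paths copied from the log; the loop is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            // A non-zero exit (non-nil error) means "expires within 24h" or "unreadable".
            if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
                fmt.Printf("%s: would expire within 24h (or could not be read): %v\n", c, err)
                continue
            }
            fmt.Printf("%s: will not expire within 24h\n", c)
        }
    }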
	I1208 00:30:48.052941  890932 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:48.053033  890932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:30:48.053097  890932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:30:48.081438  890932 cri.go:89] found id: ""
	I1208 00:30:48.081565  890932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:30:48.089271  890932 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:30:48.089305  890932 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:30:48.089313  890932 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:30:48.093391  890932 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:30:48.093432  890932 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:30:48.093495  890932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:30:48.102864  890932 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:30:48.103337  890932 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.103450  890932 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "functional-386544" cluster setting kubeconfig missing "functional-386544" context setting]
	I1208 00:30:48.103819  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.104260  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.104413  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.105009  890932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:30:48.105030  890932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:30:48.105036  890932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:30:48.105041  890932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:30:48.105047  890932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:30:48.105105  890932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
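[Editor's note] The rest.Config dump above shows the client the test builds: TLS client cert/key from the profile directory, the cluster CA from .minikube/ca.crt, pointed at https://192.168.49.2:8441. Building the equivalent client with client-go from the repaired kubeconfig (a sketch; error handling trimmed to panics):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22054-843440/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same request the log issues next: GET /api/v1/nodes/functional-386544
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-386544", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("node:", node.Name)
    }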
	I1208 00:30:48.105315  890932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:30:48.117774  890932 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:30:48.117857  890932 kubeadm.go:602] duration metric: took 24.417752ms to restartPrimaryControlPlane
	I1208 00:30:48.117881  890932 kubeadm.go:403] duration metric: took 64.945899ms to StartCluster
	I1208 00:30:48.117925  890932 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.118025  890932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.118797  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.119107  890932 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:30:48.119487  890932 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:30:48.119575  890932 addons.go:70] Setting storage-provisioner=true in profile "functional-386544"
	I1208 00:30:48.119600  890932 addons.go:239] Setting addon storage-provisioner=true in "functional-386544"
	I1208 00:30:48.119601  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:48.119630  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.120591  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.119636  890932 addons.go:70] Setting default-storageclass=true in profile "functional-386544"
	I1208 00:30:48.120910  890932 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386544"
	I1208 00:30:48.121235  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.122185  890932 out.go:179] * Verifying Kubernetes components...
	I1208 00:30:48.124860  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:48.159125  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.159302  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.159592  890932 addons.go:239] Setting addon default-storageclass=true in "functional-386544"
	I1208 00:30:48.159620  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.160038  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.170516  890932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:30:48.173762  890932 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.173784  890932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:30:48.173857  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.210938  890932 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:48.210964  890932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:30:48.211031  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.228251  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.254642  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.338576  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:48.365732  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.388846  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.094190  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094240  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094289  890932 retry.go:31] will retry after 221.572731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094347  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094353  890932 retry.go:31] will retry after 127.29639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
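Both addon manifests fail to apply because kubectl's validation step tries to download the OpenAPI schema from the apiserver, which is still refusing connections on port 8441, so minikube retries each apply with a randomized, roughly growing delay (221ms and 127ms here, climbing past 13s further down). A generic reconstruction of that retry pattern, not minikube's actual retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn with a randomized, roughly doubling wait,
// printing the same kind of "will retry after ..." line the log shows.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter keeps the two concurrent appliers (storageclass and
		// storage-provisioner) from retrying in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		return errors.New("connection refused") // stands in for the failing kubectl apply
	})
}
```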
	I1208 00:30:49.094558  890932 node_ready.go:35] waiting up to 6m0s for node "functional-386544" to be "Ready" ...
	I1208 00:30:49.094733  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.094831  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.095237  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
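In parallel, node_ready polls GET /api/v1/nodes/functional-386544 roughly every 500ms for up to 6m0s, treating the connection-refused responses as "not ready yet" rather than as fatal errors. A hedged client-go equivalent of that wait loop:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// swallowing transient errors (e.g. connection refused) so the loop keeps going.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver still coming up: retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "functional-386544", 6*time.Minute))
}
```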
	I1208 00:30:49.222592  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.293397  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.293520  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.293548  890932 retry.go:31] will retry after 191.192714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.316617  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.385398  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.389149  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.389192  890932 retry.go:31] will retry after 221.019406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.485459  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.544915  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.548575  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.548650  890932 retry.go:31] will retry after 430.912171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.594843  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.594928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.595415  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.610614  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.669839  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.669884  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.669904  890932 retry.go:31] will retry after 602.088887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.980400  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:50.054076  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.057921  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.057957  890932 retry.go:31] will retry after 1.251170732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.095196  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.095305  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:50.273088  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:50.333799  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.333898  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.333941  890932 retry.go:31] will retry after 841.525831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.595581  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.595651  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.595949  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:51.095803  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.096238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:51.096319  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:51.176619  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:51.234663  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.238362  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.238405  890932 retry.go:31] will retry after 1.674228806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.309626  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:51.370041  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.373759  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.373793  890932 retry.go:31] will retry after 1.825797421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.595251  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.595336  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.595859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.095576  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.095656  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.096001  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.594894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.595585  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.912970  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:52.971340  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:52.975027  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:52.975063  890932 retry.go:31] will retry after 2.158822419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.095343  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.095426  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.095834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:53.200381  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:53.262558  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:53.262597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.262618  890932 retry.go:31] will retry after 2.117348765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.595941  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.596038  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.596315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:53.596377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:54.094883  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.095321  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:54.595354  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.595475  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.097427  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.097684  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.097999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.134417  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:55.207147  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.207186  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.207211  890932 retry.go:31] will retry after 1.888454669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.380583  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:55.442228  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.442305  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.442354  890932 retry.go:31] will retry after 2.144073799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.595860  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.595937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.596276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:56.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.095041  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:56.095552  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:56.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.595189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.094913  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.094995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.096590  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:57.159346  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.159395  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.159419  890932 retry.go:31] will retry after 2.451052222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.586888  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:57.595329  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.595647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.595917  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.644195  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.648428  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.648466  890932 retry.go:31] will retry after 6.27239315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:58.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.095132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:58.595202  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.595277  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.595673  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:58.595737  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:59.095382  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.095474  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.095817  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.595497  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.595641  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.595962  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.611138  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:59.678142  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:59.678192  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:59.678217  890932 retry.go:31] will retry after 3.668002843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:00.095797  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.096216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:00.594886  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.594963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.595392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:01.095660  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.095757  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.096070  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:01.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:01.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.594889  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.595445  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.094968  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.595407  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.346685  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:03.431951  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.432026  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.432051  890932 retry.go:31] will retry after 7.871453146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.595808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.595982  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.596320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:03.596392  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:03.921995  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:03.979614  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.984229  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.984264  890932 retry.go:31] will retry after 6.338984785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:04.095500  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.095579  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.095881  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:04.595749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.596230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.094969  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.594775  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.595280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:06.094874  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:06.095343  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:06.594851  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.594931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.596121  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:07.095769  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.095852  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.096129  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:07.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.595312  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.594744  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.594830  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.595101  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:08.595154  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:09.094875  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.094970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.095284  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:09.594884  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.594974  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.095326  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.095417  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.095739  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.324305  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:10.384998  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:10.385051  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.385071  890932 retry.go:31] will retry after 7.782157506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.595548  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.595897  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:10.595950  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
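This warning closes one iteration of the readiness loop: the client GETs the node object roughly every 500ms and checks its Ready condition, retrying while the connection is refused. A minimal client-go sketch of that loop, assuming the kubeconfig path and node name from this log (minikube's real wait logic in node_ready.go is more involved):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ready, err := nodeReady(context.Background(), cs, "functional-386544")
            if err != nil {
                fmt.Println("will retry:", err) // connection refused while the apiserver is down
            } else if ready {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the timestamps
        }
    }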
	I1208 00:31:11.095753  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.095835  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:11.304608  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:11.367180  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:11.367234  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.367256  890932 retry.go:31] will retry after 13.123466664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
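The "will retry after" delays scattered through this log (7.8s, 8.1s, 13.1s, 14.9s, 17.0s, 17.6s, 36.0s) grow irregularly rather than at a fixed interval, which is what a jittered, growing backoff looks like. A self-contained sketch of that pattern; retryWithJitter and its base/attempt values are illustrative stand-ins, not minikube's actual retry.go tuning:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter retries fn with a randomized, roughly growing delay,
    // printing the same "will retry after ..." shape as the log.
    func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << i                            // exponential growth per attempt
            d += time.Duration(rand.Int63n(int64(d))) // plus up to 100% jitter
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        err := retryWithJitter(4, 2*time.Second, func() error {
            return errors.New("connect: connection refused") // stand-in for the kubectl apply
        })
        fmt.Println("giving up:", err)
    }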
	I1208 00:31:11.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.595455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.595807  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.095614  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.095694  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.095989  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.594814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:13.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.094906  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:13.095366  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:13.595620  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.595700  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.094918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.095230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.594881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.595183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.094943  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.095289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.594876  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.595270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:15.595318  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:16.094820  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.094894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.095164  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:16.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.594908  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.595244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.095054  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.095138  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.095471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.595816  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.595940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.596241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:17.596293  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:18.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.094955  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:18.168028  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:18.232113  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:18.232150  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.232169  890932 retry.go:31] will retry after 8.094581729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.595690  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.595775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.596183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.095628  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.095697  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.096011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.594718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.594802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.595181  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:20.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.095232  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:20.095311  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:20.595598  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.595793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.596357  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.095040  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.594912  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.595011  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.595362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.094750  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.094826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.095087  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.594811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:22.595315  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:23.094999  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.095088  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.095463  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:23.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.595136  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.094856  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.490869  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:24.557459  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:24.557507  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.557527  890932 retry.go:31] will retry after 14.933128441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.595841  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.595922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.596313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:24.596367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:25.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.094843  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.095113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:25.594817  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.594915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.595217  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.094904  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.095360  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.327725  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:26.388171  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:26.388210  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.388230  890932 retry.go:31] will retry after 17.607962094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.595498  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.595632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.595892  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:27.095752  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.095851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.096189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:27.096258  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:27.594738  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.095672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.096073  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.595257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.595836  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.595984  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.596331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:29.596385  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:30.095156  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.095252  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.095627  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:30.595556  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.596442  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.094732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.094808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.095102  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.594886  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.595210  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:32.094828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.095216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:32.095266  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:32.595760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.595841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.596354  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.094945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.095264  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.594878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:34.094814  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.095244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:34.095287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:34.594940  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.595021  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.595365  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.094942  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.095029  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.595132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:36.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.094904  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.095255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:36.095316  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:36.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.595276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.095623  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.095696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.095973  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.594749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.594850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.595227  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:38.094987  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.095112  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:38.095555  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:38.595474  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.595556  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.595831  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.095726  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.095806  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.096148  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.491741  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:39.568327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:39.568372  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.568394  890932 retry.go:31] will retry after 16.95217324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.595718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.596632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.597031  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:40.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.095785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:40.096128  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:40.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.595175  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.094806  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.094893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.095209  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.595791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.596479  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.094922  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.095018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.095545  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.595373  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.595463  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.596518  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:31:42.596581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:43.095290  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.095363  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.095661  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.595732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.595812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.596157  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.996743  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:44.061795  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:44.065597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.065636  890932 retry.go:31] will retry after 36.030777087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.094709  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.095134  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:44.595619  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.595689  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.596188  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:45.095192  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.095284  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:45.095814  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:45.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.595664  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.596700  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:46.095471  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.095564  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.095854  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:46.595665  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.595741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.596605  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:47.095443  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.095832  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:47.095881  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:47.595397  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.595480  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.595753  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.095688  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.095797  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.096203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.594869  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:49.095593  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.095675  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.096008  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:49.096067  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:49.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.594865  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.595221  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.094833  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.596966  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:51.095757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.095841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:51.096238  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:51.594921  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.595014  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.595361  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.094793  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.095231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.594828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.094823  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.594757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.594827  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.595090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:53.595131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:54.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.095337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:54.595028  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.595111  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.595443  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.095154  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.095659  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.595576  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.595659  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.595995  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:55.596040  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:56.094916  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:56.520835  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:56.580569  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580606  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580706  890932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
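	[editor's note] The addon failure above is the expected shape when kubectl cannot reach the apiserver to download the OpenAPI schema for validation: the apply exits 1 and minikube logs "apply failed, will retry". A hedged Go sketch of that retry-the-apply behavior follows, using os/exec; the retry count, backoff, and helper name applyManifest are assumptions, not minikube's actual addons.go code.

	package main

	import (
		"log"
		"os"
		"os/exec"
		"time"
	)

	// applyManifest shells out to the bundled kubectl, pointing it at the
	// in-VM kubeconfig, and returns combined stdout/stderr for logging.
	func applyManifest(kubectl, manifest string) ([]byte, error) {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		return cmd.CombinedOutput()
	}

	func main() {
		const (
			kubectl  = "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
			manifest = "/etc/kubernetes/addons/storage-provisioner.yaml"
		)
		// Fixed backoff: validation only succeeds once the apiserver is
		// reachable again, so transient "connection refused" failures are
		// retried rather than treated as fatal.
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := applyManifest(kubectl, manifest)
			if err == nil {
				log.Printf("applied %s", manifest)
				return
			}
			log.Printf("apply failed (attempt %d, will retry): %v\n%s", attempt, err, out)
			time.Sleep(6 * time.Second)
		}
		log.Fatal("giving up: apiserver never came back")
	}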
	I1208 00:31:56.595717  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.595785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.596127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.094846  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.094922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.594959  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.595375  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:58.095719  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.095802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.096233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:58.096313  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:58.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.095006  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.095098  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.095434  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.594763  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.594848  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.595114  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.095594  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.595570  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.596011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:00.596082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:01.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.595258  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.096010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.595794  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.596311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:02.596371  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:03.095057  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.095145  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.095500  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:03.595367  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.595442  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.095519  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.095642  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.096000  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.595726  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.595814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.596263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:05.095616  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.095688  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.095960  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:05.096006  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:05.595742  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.595817  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.595654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.596003  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:07.095781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.095861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.096199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:07.096254  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:07.594824  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.594910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.094781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.095147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.595140  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.595213  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.595560  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.095144  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.095234  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.095578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.595126  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.595198  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.595458  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:09.595499  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.095157  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.095251  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.095657  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.595220  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.595297  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.595648  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.095385  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.095455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.095752  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.595492  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.595574  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.595922  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:11.595978  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.095776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.095855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.096220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.594787  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.595182  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.094907  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.094987  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.095332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.595577  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:13.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:14.095571  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.095649  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.095941  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.595772  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.595853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.596231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.094898  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.095334  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.595180  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:16.095326  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:16.595008  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.595092  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.595453  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.094788  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.095049  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.594824  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.595151  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.094845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.094926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.595685  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.595803  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.596147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:18.596225  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:19.094780  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.095319  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.594891  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.594970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.595320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.094805  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.094881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.095201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.097611  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:20.173666  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173721  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173816  890932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:32:20.177110  890932 out.go:179] * Enabled addons: 
	I1208 00:32:20.180584  890932 addons.go:530] duration metric: took 1m32.061097112s for enable addons: enabled=[]
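	[editor's note] The "duration metric" line above (1m32s for an empty enabled=[] set) reflects the whole retry budget being consumed before addons were abandoned. A trivial self-contained sketch of how such a line is typically produced with time.Since; the enabled slice here is a hypothetical stand-in, not minikube's callback result.

	package main

	import (
		"log"
		"time"
	)

	func main() {
		start := time.Now()
		enabled := []string{} // hypothetical: addons that actually came up
		time.Sleep(10 * time.Millisecond) // stand-in for the enable-addons work
		log.Printf("duration metric: took %s for enable addons: enabled=%v",
			time.Since(start), enabled)
	}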
	I1208 00:32:20.595272  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.595353  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.595670  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.095445  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.095926  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:21.595648  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.596006  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.094730  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.094810  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.095155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.594845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.594924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.095654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.095734  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.096034  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.096082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.594882  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.595243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.094924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.095286  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.595670  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.595754  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.596025  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.095811  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.095896  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.096308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.096381  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:25.594842  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.095626  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.095702  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.095977  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.595770  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.596206  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.094927  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.095271  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.594777  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:27.595194  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.094869  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.095355  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:28.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.595399  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.095084  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.095158  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.595122  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.595197  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.595539  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:29.595597  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:30.095160  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.095253  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.595339  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.595416  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.595701  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.095621  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.095959  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.595634  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.595713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.596065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:31.596120  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:32.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.096086  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.595231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.094941  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.594792  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.094953  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:34.095348  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:34.595041  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.595122  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.095733  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.096082  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.594826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.595179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.094819  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.595680  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.595807  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.596074  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:36.596131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:37.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:32:37.094901  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.095247  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.594826  890932 type.go:168] "Request Body" body=""
	I1208 00:32:37.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.595255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.095114  890932 type.go:168] "Request Body" body=""
	I1208 00:32:38.095188  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.095665  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:32:38.594842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.595165  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:32:39.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.095320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:39.095377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:39.594776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:39.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.595118  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:32:40.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.095374  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.595107  890932 type.go:168] "Request Body" body=""
	I1208 00:32:40.595184  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.595524  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.094737  890932 type.go:168] "Request Body" body=""
	I1208 00:32:41.094813  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.095065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.594796  890932 type.go:168] "Request Body" body=""
	I1208 00:32:41.594877  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:41.595246  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:42.095040  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.095448  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.595773  890932 type.go:168] "Request Body" body=""
	I1208 00:32:42.595847  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.094911  890932 type.go:168] "Request Body" body=""
	I1208 00:32:43.095007  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.095748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:32:43.594832  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:32:43.596148  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical request/response cycle repeated every ~500ms from 00:32:44 through 00:33:45: each GET to https://192.168.49.2:8441/api/v1/nodes/functional-386544 fails immediately with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs the "will retry" warning roughly every 2s ...]
	I1208 00:33:45.595704  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.595779  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.596135  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:45.596188  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.095300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.094796  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.094870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.095149  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.594854  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.594930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.095000  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.095085  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.095460  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.095511  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:48.595390  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.595476  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.595748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.095572  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.095647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.095999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.595785  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.596224  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.095203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.594890  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.595313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:50.595368  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.094876  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.095346  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.595652  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.095805  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.096192  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.594926  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.595378  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:52.595433  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.094786  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.594865  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.094881  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.094965  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.595582  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.595660  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.595948  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:54.595991  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:55.095773  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.095890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.096222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.595262  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.095686  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.095975  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.595757  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.595833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.596223  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:56.596285  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:57.094855  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.095265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.595601  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.595670  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.595954  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.095735  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.095811  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.096159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.594840  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.594919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.095680  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.095963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:59.096015  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:59.595776  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.595860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.596187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.094948  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.095044  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.594807  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.595187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.094949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.594909  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.595385  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:01.595446  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:02.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.095145  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.594938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.095022  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.095104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.095477  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.595437  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.595711  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:03.595753  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:04.095511  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.095589  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.095964  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.595805  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.595893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.596256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.094892  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.095280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.595117  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.595525  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.095311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:06.095367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:06.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.595230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.095222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.594889  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.594993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.595353  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.095670  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.095741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:08.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:08.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.595235  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.094901  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.094980  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.595609  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.595691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.595986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.594928  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.595018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.595327  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:10.595376  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:11.094812  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.094900  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.095243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.595288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.094946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.595130  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.094818  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.094897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:13.095308  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:13.594998  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.595102  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.595450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.095713  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.095782  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.096067  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.595722  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.595804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.094925  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.095362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:15.095419  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:15.595024  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.595091  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.595369  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.094880  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.595018  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.595096  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.595400  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.095070  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.095425  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:17.095470  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:17.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.594950  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.595419  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.094891  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.094971  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.095301  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.595365  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.595444  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.595738  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.095230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.095306  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.095655  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:19.095709  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.595477  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.595561  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.595895  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.095714  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.096185  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.094844  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.095213  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.595647  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.595727  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.596033  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:21.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:22.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.094856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.594975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.595067  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.595475  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.095065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.595295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.094995  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.095075  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.095444  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.095501  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:24.595780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.595858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.596186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.094765  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.095211  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.594923  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.595001  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.595307  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.095156  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.594911  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:26.595299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.094975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.595139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.094942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.095238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.595230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.595311  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.595664  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:28.595719  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:29.095432  890932 type.go:168] "Request Body" body=""
	I1208 00:34:29.095508  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.095787  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.595520  890932 type.go:168] "Request Body" body=""
	I1208 00:34:29.595592  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.595939  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.095902  890932 type.go:168] "Request Body" body=""
	I1208 00:34:30.096081  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.096554  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.595239  890932 type.go:168] "Request Body" body=""
	I1208 00:34:30.595307  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.595584  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.095490  890932 type.go:168] "Request Body" body=""
	I1208 00:34:31.095572  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.095910  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:31.095974  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:31.595725  890932 type.go:168] "Request Body" body=""
	I1208 00:34:31.595808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.095639  890932 type.go:168] "Request Body" body=""
	I1208 00:34:32.095709  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.095992  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.595814  890932 type.go:168] "Request Body" body=""
	I1208 00:34:32.595902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.596323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.094887  890932 type.go:168] "Request Body" body=""
	I1208 00:34:33.094963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.095317  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.595336  890932 type.go:168] "Request Body" body=""
	I1208 00:34:33.595409  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.595677  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:33.595717  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.095551  890932 type.go:168] "Request Body" body=""
	I1208 00:34:34.095629  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.095979  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.595790  890932 type.go:168] "Request Body" body=""
	I1208 00:34:34.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.095379  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 poll repeats every ~500ms from 00:34:35 through 00:35:36; every response is empty (status="", headers="", milliseconds=0), and node_ready.go:55 re-logs the same "connection refused" warning roughly every two seconds]
	I1208 00:35:37.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.594935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.595344  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:38.095615  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.095691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.095993  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.595236  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.094849  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.594821  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.594893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.595159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.094832  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.094914  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:40.095383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.595050  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.595133  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.095165  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.095247  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.595432  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.595533  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.595908  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.095710  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.096304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.096383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.594857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.595136  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.595212  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.595549  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.095717  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.095796  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.096072  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.595340  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.595058  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.595128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.595471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.095186  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.095266  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.595402  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.595481  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.595824  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.595879  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.595618  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.595696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.596010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.095716  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.095799  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.096202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.595337  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.595413  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.595706  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.095444  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.095524  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.095902  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.095961  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.595540  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.595625  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.595976  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.095709  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.095792  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.096095  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.594799  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.594874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.094959  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.095064  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.095433  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.594801  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.595287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.094975  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.095331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.595042  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.595124  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.595480  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.095139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.594836  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.595282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.595384  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.094872  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.095335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.595658  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.595729  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.596021  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.094747  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.094842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.095674  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.096062  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.096108  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.594963  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.595039  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.595371  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.094934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.594883  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.594996  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.595394  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.095103  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.095186  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.095515  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.595715  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.595795  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.596169  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.596227  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:59.095645  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.096039  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.594804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.595133  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.094929  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.095015  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.095342  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.595183  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.595265  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.595623  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.095438  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.095859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:01.095916  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.595630  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.595708  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.596080  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.096058  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.594816  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.594895  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.595265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.095283  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.594768  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.595207  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:03.595263  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:04.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.594813  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.594897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.095649  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.095720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.595766  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.596299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:06.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.095304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.595639  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.595720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.596054  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.094760  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.094857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.095153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.594898  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.594972  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.595325  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.095635  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.095713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.095986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:08.096028  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:08.595142  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.595227  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.595555  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.095287  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.095364  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.095690  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.595392  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.595461  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.095517  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.595700  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.595784  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:10.596216  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:11.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.094850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.595266  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.094980  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.095061  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.095386  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.595126  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.094912  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:13.095347  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:13.595000  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.595104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.595437  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.095097  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.095172  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.095450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.595177  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.595281  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.595679  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.095515  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.095616  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:15.096068  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:15.595565  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.595677  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.595994  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.094724  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.094815  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.095174  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.594934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.094859  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.095173  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.594829  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:17.595272  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:18.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.594793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.595065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.094767  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.594925  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.595263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.595322  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:20.095644  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.096099  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.594887  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.094954  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.095036  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.095363  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.595677  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.595750  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.596077  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.596147  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:22.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.094938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.095256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.594939  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.095643  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.095723  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.096019  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.595048  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.595147  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.595567  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.095396  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.095478  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:24.095979  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:24.595457  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.595529  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.595803  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.095586  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.095668  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.096057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.595744  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.595838  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.596274  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.095659  890932 type.go:168] "Request Body" body=""
	I1208 00:36:26.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.096092  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:26.096144  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 poll repeats every ~500ms from 00:36:26 through 00:36:49, each request returning an empty response, with the node_ready.go:55 warning above ("dial tcp 192.168.49.2:8441: connect: connection refused") re-logged every few attempts ...]
	I1208 00:36:49.094763  890932 type.go:168] "Request Body" body=""
	I1208 00:36:49.094842  890932 node_ready.go:38] duration metric: took 6m0.000209264s for node "functional-386544" to be "Ready" ...
	I1208 00:36:49.097838  890932 out.go:203] 
	W1208 00:36:49.100712  890932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:36:49.100735  890932 out.go:285] * 
	W1208 00:36:49.102896  890932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:36:49.105576  890932 out.go:203] 

                                                
                                                
** /stderr **
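For context on the stderr above: the poll is a plain conditional wait on the node's Ready condition, retrying through transient apiserver errors until the 6m deadline expires. A minimal sketch of that pattern using client-go (hypothetical code, not minikube's actual node_ready.go; the default kubeconfig location and the hard-coded node name are assumptions taken from this run):

	// readinesspoll sketches the loop visible in the stderr above: GET the node
	// every 500ms and retry through transient errors until it reports Ready or
	// a 6-minute deadline passes.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig at the default location points at the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-386544", metav1.GetOptions{})
				if err != nil {
					// "connection refused" while the apiserver restarts is treated
					// as transient: log and keep polling, as the warnings above do.
					fmt.Printf("will retry: %v\n", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			// Corresponds to the "WaitNodeCondition: context deadline exceeded" failure above.
			fmt.Printf("node never became Ready: %v\n", err)
		}
	}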
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-386544 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.585100224s for "functional-386544" cluster.
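Exit status 80 is minikube's GUEST error class, consistent with the GUEST_START message above: the apiserver on 192.168.49.2:8441 never came back, so the 6m node-ready wait (StartHostTimeout:6m0s in the cluster config further down) simply expired. A minimal triage sequence for a local reproduction, using only commands that appear in this report (the logs --file flag is the one the error box itself recommends):

	out/minikube-linux-arm64 start -p functional-386544 --alsologtostderr -v=8
	out/minikube-linux-arm64 status -p functional-386544
	out/minikube-linux-arm64 logs -p functional-386544 --file=logs.txt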
I1208 00:36:49.639684  846711 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
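The inspect output shows the apiserver port 8441/tcp published on 127.0.0.1:33561, while the address minikube polls, 192.168.49.2:8441, lives on the container network. To read a single mapping back out of this structure, the same --format expression the minikube log below applies to 22/tcp works (a sketch; docker port reports the same thing):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-386544
	docker port functional-386544 8441/tcp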
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (346.803047ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-932121 ssh sudo cat /usr/share/ca-certificates/8467112.pem                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save kicbase/echo-server:functional-932121 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image rm kicbase/echo-server:functional-932121 --alsologtostderr                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format short --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format yaml --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format json --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format table --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh pgrep buildkitd                                                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image          │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                          │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete         │ -p functional-932121                                                                                                                                            │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start          │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start          │ -p functional-386544 --alsologtostderr -v=8                                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:30:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:30:43.106412  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106440  890932 out.go:374] Setting ErrFile to fd 2...
	I1208 00:30:43.106489  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106802  890932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:30:43.107327  890932 out.go:368] Setting JSON to false
	I1208 00:30:43.108252  890932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18796,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:30:43.108353  890932 start.go:143] virtualization:  
	I1208 00:30:43.111927  890932 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:30:43.114895  890932 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:30:43.114974  890932 notify.go:221] Checking for updates...
	I1208 00:30:43.121042  890932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:30:43.124118  890932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:43.127146  890932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:30:43.130017  890932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:30:43.132953  890932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:30:43.136385  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:43.136518  890932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:30:43.171722  890932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:30:43.171844  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.232988  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.222800102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.233101  890932 docker.go:319] overlay module found
	I1208 00:30:43.236209  890932 out.go:179] * Using the docker driver based on existing profile
	I1208 00:30:43.239024  890932 start.go:309] selected driver: docker
	I1208 00:30:43.239046  890932 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.240193  890932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:30:43.240306  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.299458  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.288388391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.299888  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:43.299955  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:43.300012  890932 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.303163  890932 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:30:43.305985  890932 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:30:43.309025  890932 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:30:43.312042  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:43.312102  890932 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:30:43.312113  890932 cache.go:65] Caching tarball of preloaded images
	I1208 00:30:43.312160  890932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:30:43.312254  890932 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:30:43.312266  890932 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:30:43.312379  890932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:30:43.332475  890932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:30:43.332500  890932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:30:43.332516  890932 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:30:43.332550  890932 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:30:43.332614  890932 start.go:364] duration metric: took 40.517µs to acquireMachinesLock for "functional-386544"
	I1208 00:30:43.332637  890932 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:30:43.332643  890932 fix.go:54] fixHost starting: 
	I1208 00:30:43.332918  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:43.364362  890932 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:30:43.364391  890932 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:30:43.367522  890932 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:30:43.367561  890932 machine.go:94] provisionDockerMachine start ...
	I1208 00:30:43.367667  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.390594  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.390943  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.390953  890932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:30:43.546039  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
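
The inspect template in the Run line above is how the host port bound to the container's 22/tcp (33558 in this run) is resolved before every SSH command. A minimal Go sketch of the same lookup, assuming only that the docker CLI is on PATH; the helper name resolveSSHPort is illustrative, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // resolveSSHPort runs the same Go-template inspect query seen in the log
    // and returns the host port published for the container's 22/tcp.
    func resolveSSHPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := resolveSSHPort("functional-386544")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Printf("ssh -p %s docker@127.0.0.1\n", port) // 33558 for this run
    }
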
	I1208 00:30:43.546064  890932 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:30:43.546132  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.563909  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.564221  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.564240  890932 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:30:43.728055  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.728136  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.746428  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.746778  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.746805  890932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:30:43.898980  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
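
The hosts-file script above is idempotent: the first grep skips everything when some /etc/hosts line already ends with the machine name, the second rewrites an existing 127.0.1.1 entry in place, and only when neither matches is a new entry appended. Either way the guest converges on a single self-resolution line of the form:

    127.0.1.1 functional-386544
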
	I1208 00:30:43.899007  890932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:30:43.899068  890932 ubuntu.go:190] setting up certificates
	I1208 00:30:43.899078  890932 provision.go:84] configureAuth start
	I1208 00:30:43.899155  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:43.917225  890932 provision.go:143] copyHostCerts
	I1208 00:30:43.917271  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917317  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:30:43.917335  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917414  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:30:43.917515  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917537  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:30:43.917547  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917575  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:30:43.917632  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917656  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:30:43.917664  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917691  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:30:43.917796  890932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
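
The server certificate generated here is signed by the minikube CA and must carry every name the machine can be addressed by, which is why the SAN list above mixes IPs (127.0.0.1, 192.168.49.2) and hostnames (functional-386544, localhost, minikube). A compressed Go sketch of issuing such a certificate with crypto/x509; it generates a throwaway stand-in CA instead of reading the real ca.pem/ca-key.pem, and elides error handling:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // stand-in CA key pair (the real flow loads ca-key.pem from disk)
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // server certificate carrying the SANs listed in the log line above
        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-386544"}},
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"functional-386544", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem equivalent
    }
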
	I1208 00:30:44.201729  890932 provision.go:177] copyRemoteCerts
	I1208 00:30:44.201799  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:30:44.201847  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.218852  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.326622  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:30:44.326687  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:30:44.345138  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:30:44.345250  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:30:44.363475  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:30:44.363575  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:30:44.382571  890932 provision.go:87] duration metric: took 483.468304ms to configureAuth
	I1208 00:30:44.382643  890932 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:30:44.382843  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:44.382857  890932 machine.go:97] duration metric: took 1.015288541s to provisionDockerMachine
	I1208 00:30:44.382865  890932 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:30:44.382880  890932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:30:44.382939  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:30:44.382987  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.401380  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.506846  890932 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:30:44.510586  890932 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:30:44.510612  890932 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:30:44.510623  890932 command_runner.go:130] > VERSION_ID="12"
	I1208 00:30:44.510628  890932 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:30:44.510633  890932 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:30:44.510637  890932 command_runner.go:130] > ID=debian
	I1208 00:30:44.510641  890932 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:30:44.510646  890932 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:30:44.510652  890932 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:30:44.510734  890932 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:30:44.510755  890932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:30:44.510768  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:30:44.510833  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:30:44.510921  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:30:44.510932  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /etc/ssl/certs/8467112.pem
	I1208 00:30:44.511028  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:30:44.511037  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> /etc/test/nested/copy/846711/hosts
	I1208 00:30:44.511082  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:30:44.518977  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:44.538494  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:30:44.556928  890932 start.go:296] duration metric: took 174.046033ms for postStartSetup
	I1208 00:30:44.557012  890932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:30:44.557057  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.579278  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.683552  890932 command_runner.go:130] > 11%
	I1208 00:30:44.683622  890932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:30:44.688016  890932 command_runner.go:130] > 174G
	I1208 00:30:44.688056  890932 fix.go:56] duration metric: took 1.355411206s for fixHost
	I1208 00:30:44.688067  890932 start.go:83] releasing machines lock for "functional-386544", held for 1.355443108s
	I1208 00:30:44.688146  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:44.705277  890932 ssh_runner.go:195] Run: cat /version.json
	I1208 00:30:44.705345  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.705617  890932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:30:44.705687  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.723084  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.728238  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.826153  890932 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:30:44.826300  890932 ssh_runner.go:195] Run: systemctl --version
	I1208 00:30:44.917784  890932 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:30:44.920412  890932 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:30:44.920484  890932 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:30:44.920574  890932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:30:44.924900  890932 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:30:44.925095  890932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:30:44.925215  890932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:30:44.933474  890932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:30:44.933497  890932 start.go:496] detecting cgroup driver to use...
	I1208 00:30:44.933530  890932 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:30:44.933580  890932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:30:44.950010  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:30:44.963687  890932 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:30:44.963783  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:30:44.980391  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:30:44.994304  890932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:30:45.255981  890932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:30:45.407305  890932 docker.go:234] disabling docker service ...
	I1208 00:30:45.407423  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:30:45.423468  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:30:45.437222  890932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:30:45.561603  890932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:30:45.705878  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:30:45.719726  890932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:30:45.733506  890932 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1208 00:30:45.735147  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:30:45.744694  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:30:45.753960  890932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:30:45.754081  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:30:45.763511  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.772723  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:30:45.781584  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.790600  890932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:30:45.799135  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:30:45.808317  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:30:45.817244  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
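
Taken together, these sed edits pin the sandbox (pause) image, disable OOM-score clamping, force cgroupfs (SystemdCgroup = false, matching the "cgroupfs" driver detected on the host), normalize legacy runc runtime names to io.containerd.runc.v2, point CNI at /etc/cni/net.d, and re-enable unprivileged ports. The net effect on /etc/containerd/config.toml is roughly the following CRI fragment, reconstructed from the commands rather than dumped from the file (section names vary across containerd config versions):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
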
	I1208 00:30:45.826211  890932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:30:45.833037  890932 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:30:45.834008  890932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
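
These two kernel settings are the usual pod-networking prerequisites: the sysctl read confirms that bridged traffic is visible to iptables (already 1 here), and the write turns on IPv4 forwarding so packets can cross between pod, service, and node networks. A persistent equivalent of these transient settings, with the file path assumed for illustration:

    # /etc/sysctl.d/99-kubernetes.conf (hypothetical path)
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    # apply without a reboot:
    #   sudo sysctl --system
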
	I1208 00:30:45.841603  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:45.965344  890932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:30:46.100261  890932 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:30:46.100385  890932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:30:46.104210  890932 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1208 00:30:46.104295  890932 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:30:46.104358  890932 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1208 00:30:46.104385  890932 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:46.104410  890932 command_runner.go:130] > Access: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104446  890932 command_runner.go:130] > Modify: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104470  890932 command_runner.go:130] > Change: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104490  890932 command_runner.go:130] >  Birth: -
	I1208 00:30:46.104859  890932 start.go:564] Will wait 60s for crictl version
	I1208 00:30:46.104961  890932 ssh_runner.go:195] Run: which crictl
	I1208 00:30:46.108543  890932 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:30:46.108924  890932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:30:46.136367  890932 command_runner.go:130] > Version:  0.1.0
	I1208 00:30:46.136449  890932 command_runner.go:130] > RuntimeName:  containerd
	I1208 00:30:46.136470  890932 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1208 00:30:46.136491  890932 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:30:46.136542  890932 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:30:46.136636  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.156742  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.159302  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.181269  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.189080  890932 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:30:46.192076  890932 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:30:46.209081  890932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:30:46.212923  890932 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:30:46.213097  890932 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:30:46.213209  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:46.213289  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.236482  890932 command_runner.go:130] > {
	I1208 00:30:46.236506  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.236511  890932 command_runner.go:130] >     {
	I1208 00:30:46.236520  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.236526  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236531  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.236534  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236538  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236551  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.236558  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236563  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.236571  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236576  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236586  890932 command_runner.go:130] >     },
	I1208 00:30:46.236590  890932 command_runner.go:130] >     {
	I1208 00:30:46.236601  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.236605  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236610  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.236617  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236622  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236632  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.236641  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236646  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.236649  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236654  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236664  890932 command_runner.go:130] >     },
	I1208 00:30:46.236668  890932 command_runner.go:130] >     {
	I1208 00:30:46.236675  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.236679  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236687  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.236690  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236699  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236718  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.236722  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236728  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.236733  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.236740  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236743  890932 command_runner.go:130] >     },
	I1208 00:30:46.236746  890932 command_runner.go:130] >     {
	I1208 00:30:46.236753  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.236760  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236766  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.236769  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236773  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236781  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.236788  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236792  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.236803  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236808  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236814  890932 command_runner.go:130] >       },
	I1208 00:30:46.236821  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236825  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236828  890932 command_runner.go:130] >     },
	I1208 00:30:46.236832  890932 command_runner.go:130] >     {
	I1208 00:30:46.236841  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.236847  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236853  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.236856  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236860  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236868  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.236874  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236879  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.236885  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236894  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236901  890932 command_runner.go:130] >       },
	I1208 00:30:46.236908  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236912  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236916  890932 command_runner.go:130] >     },
	I1208 00:30:46.236926  890932 command_runner.go:130] >     {
	I1208 00:30:46.236933  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.236937  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236943  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.236947  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236951  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236962  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.236968  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236972  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.236976  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236980  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236989  890932 command_runner.go:130] >       },
	I1208 00:30:46.236993  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236997  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237002  890932 command_runner.go:130] >     },
	I1208 00:30:46.237005  890932 command_runner.go:130] >     {
	I1208 00:30:46.237012  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.237017  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237024  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.237027  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237032  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237040  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.237047  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237055  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.237059  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237063  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237066  890932 command_runner.go:130] >     },
	I1208 00:30:46.237076  890932 command_runner.go:130] >     {
	I1208 00:30:46.237084  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.237095  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237104  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.237107  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237112  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237120  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.237126  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237131  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.237134  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237139  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.237142  890932 command_runner.go:130] >       },
	I1208 00:30:46.237146  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237153  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237157  890932 command_runner.go:130] >     },
	I1208 00:30:46.237166  890932 command_runner.go:130] >     {
	I1208 00:30:46.237173  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.237178  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237182  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.237189  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237193  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237201  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.237206  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237210  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.237216  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237221  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.237227  890932 command_runner.go:130] >       },
	I1208 00:30:46.237231  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237235  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.237238  890932 command_runner.go:130] >     }
	I1208 00:30:46.237241  890932 command_runner.go:130] >   ]
	I1208 00:30:46.237244  890932 command_runner.go:130] > }
	I1208 00:30:46.239834  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.239857  890932 containerd.go:534] Images already preloaded, skipping extraction
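
The conclusion at containerd.go:627 comes from comparing the parsed "crictl images --output json" listing against the image set required for Kubernetes v1.35.0-beta.0; the near-identical dump that follows is a second, independent query made by the cache_images check. A minimal Go sketch of that comparison, assuming crictl is installed and using a hand-picked subset of the required tags:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the shape of the crictl JSON output above.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // illustrative subset; the real required set is version-dependent
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        } {
            if !have[want] {
                fmt.Println("missing:", want)
                return
            }
        }
        fmt.Println("all images are preloaded")
    }
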
	I1208 00:30:46.239919  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.262227  890932 command_runner.go:130] > {
	I1208 00:30:46.262250  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.262255  890932 command_runner.go:130] >     {
	I1208 00:30:46.262265  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.262280  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262286  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.262289  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262293  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262303  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.262310  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262315  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.262319  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262323  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262326  890932 command_runner.go:130] >     },
	I1208 00:30:46.262330  890932 command_runner.go:130] >     {
	I1208 00:30:46.262348  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.262357  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262363  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.262366  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262370  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262381  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.262386  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262392  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.262396  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262400  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262403  890932 command_runner.go:130] >     },
	I1208 00:30:46.262406  890932 command_runner.go:130] >     {
	I1208 00:30:46.262413  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.262427  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262439  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.262476  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262489  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262498  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.262502  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262506  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.262513  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.262517  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262524  890932 command_runner.go:130] >     },
	I1208 00:30:46.262531  890932 command_runner.go:130] >     {
	I1208 00:30:46.262539  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.262542  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262548  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.262553  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262557  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262565  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.262568  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262572  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.262579  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262583  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262588  890932 command_runner.go:130] >       },
	I1208 00:30:46.262592  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262605  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262609  890932 command_runner.go:130] >     },
	I1208 00:30:46.262612  890932 command_runner.go:130] >     {
	I1208 00:30:46.262619  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.262625  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262631  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.262634  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262638  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262646  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.262649  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262654  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.262660  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262678  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262686  890932 command_runner.go:130] >       },
	I1208 00:30:46.262690  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262694  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262697  890932 command_runner.go:130] >     },
	I1208 00:30:46.262701  890932 command_runner.go:130] >     {
	I1208 00:30:46.262707  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.262718  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262724  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.262727  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262731  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262739  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.262745  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262749  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.262755  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262759  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262772  890932 command_runner.go:130] >       },
	I1208 00:30:46.262776  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262780  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262783  890932 command_runner.go:130] >     },
	I1208 00:30:46.262786  890932 command_runner.go:130] >     {
	I1208 00:30:46.262793  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.262800  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262805  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.262809  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262812  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262819  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.262823  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262827  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.262834  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262838  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262844  890932 command_runner.go:130] >     },
	I1208 00:30:46.262848  890932 command_runner.go:130] >     {
	I1208 00:30:46.262857  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.262867  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262876  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.262882  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262886  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262893  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.262907  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262915  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.262919  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262922  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262929  890932 command_runner.go:130] >       },
	I1208 00:30:46.262933  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262943  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262947  890932 command_runner.go:130] >     },
	I1208 00:30:46.262950  890932 command_runner.go:130] >     {
	I1208 00:30:46.262957  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.262963  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262968  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.262971  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262975  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262982  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.262985  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262990  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.262996  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.263000  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.263013  890932 command_runner.go:130] >       },
	I1208 00:30:46.263017  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.263021  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.263024  890932 command_runner.go:130] >     }
	I1208 00:30:46.263027  890932 command_runner.go:130] >   ]
	I1208 00:30:46.263031  890932 command_runner.go:130] > }
	I1208 00:30:46.265493  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.265517  890932 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:30:46.265524  890932 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:30:46.265625  890932 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
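
One detail in the generated kubelet drop-in is worth calling out: the bare "ExecStart=" line is the standard systemd idiom for clearing an ExecStart inherited from the base unit, since systemd rejects a non-oneshot service that accumulates two ExecStart directives. The pattern in isolation, with the command line abridged from the unit above:

    [Service]
    # an empty assignment resets any ExecStart inherited from kubelet.service
    ExecStart=
    # so exactly one command remains in effect
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
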
	I1208 00:30:46.265699  890932 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:30:46.291229  890932 command_runner.go:130] > {
	I1208 00:30:46.291250  890932 command_runner.go:130] >   "cniconfig": {
	I1208 00:30:46.291256  890932 command_runner.go:130] >     "Networks": [
	I1208 00:30:46.291260  890932 command_runner.go:130] >       {
	I1208 00:30:46.291266  890932 command_runner.go:130] >         "Config": {
	I1208 00:30:46.291271  890932 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1208 00:30:46.291283  890932 command_runner.go:130] >           "Name": "cni-loopback",
	I1208 00:30:46.291288  890932 command_runner.go:130] >           "Plugins": [
	I1208 00:30:46.291292  890932 command_runner.go:130] >             {
	I1208 00:30:46.291297  890932 command_runner.go:130] >               "Network": {
	I1208 00:30:46.291301  890932 command_runner.go:130] >                 "ipam": {},
	I1208 00:30:46.291307  890932 command_runner.go:130] >                 "type": "loopback"
	I1208 00:30:46.291311  890932 command_runner.go:130] >               },
	I1208 00:30:46.291322  890932 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1208 00:30:46.291326  890932 command_runner.go:130] >             }
	I1208 00:30:46.291334  890932 command_runner.go:130] >           ],
	I1208 00:30:46.291344  890932 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1208 00:30:46.291348  890932 command_runner.go:130] >         },
	I1208 00:30:46.291356  890932 command_runner.go:130] >         "IFName": "lo"
	I1208 00:30:46.291362  890932 command_runner.go:130] >       }
	I1208 00:30:46.291366  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291371  890932 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1208 00:30:46.291375  890932 command_runner.go:130] >     "PluginDirs": [
	I1208 00:30:46.291379  890932 command_runner.go:130] >       "/opt/cni/bin"
	I1208 00:30:46.291390  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291395  890932 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1208 00:30:46.291398  890932 command_runner.go:130] >     "Prefix": "eth"
	I1208 00:30:46.291402  890932 command_runner.go:130] >   },
	I1208 00:30:46.291411  890932 command_runner.go:130] >   "config": {
	I1208 00:30:46.291415  890932 command_runner.go:130] >     "cdiSpecDirs": [
	I1208 00:30:46.291419  890932 command_runner.go:130] >       "/etc/cdi",
	I1208 00:30:46.291427  890932 command_runner.go:130] >       "/var/run/cdi"
	I1208 00:30:46.291432  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291436  890932 command_runner.go:130] >     "cni": {
	I1208 00:30:46.291448  890932 command_runner.go:130] >       "binDir": "",
	I1208 00:30:46.291453  890932 command_runner.go:130] >       "binDirs": [
	I1208 00:30:46.291457  890932 command_runner.go:130] >         "/opt/cni/bin"
	I1208 00:30:46.291460  890932 command_runner.go:130] >       ],
	I1208 00:30:46.291464  890932 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1208 00:30:46.291468  890932 command_runner.go:130] >       "confTemplate": "",
	I1208 00:30:46.291472  890932 command_runner.go:130] >       "ipPref": "",
	I1208 00:30:46.291475  890932 command_runner.go:130] >       "maxConfNum": 1,
	I1208 00:30:46.291479  890932 command_runner.go:130] >       "setupSerially": false,
	I1208 00:30:46.291483  890932 command_runner.go:130] >       "useInternalLoopback": false
	I1208 00:30:46.291487  890932 command_runner.go:130] >     },
	I1208 00:30:46.291492  890932 command_runner.go:130] >     "containerd": {
	I1208 00:30:46.291499  890932 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1208 00:30:46.291504  890932 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1208 00:30:46.291509  890932 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1208 00:30:46.291515  890932 command_runner.go:130] >       "runtimes": {
	I1208 00:30:46.291519  890932 command_runner.go:130] >         "runc": {
	I1208 00:30:46.291527  890932 command_runner.go:130] >           "ContainerAnnotations": null,
	I1208 00:30:46.291533  890932 command_runner.go:130] >           "PodAnnotations": null,
	I1208 00:30:46.291545  890932 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1208 00:30:46.291550  890932 command_runner.go:130] >           "cgroupWritable": false,
	I1208 00:30:46.291554  890932 command_runner.go:130] >           "cniConfDir": "",
	I1208 00:30:46.291558  890932 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1208 00:30:46.291564  890932 command_runner.go:130] >           "io_type": "",
	I1208 00:30:46.291568  890932 command_runner.go:130] >           "options": {
	I1208 00:30:46.291576  890932 command_runner.go:130] >             "BinaryName": "",
	I1208 00:30:46.291580  890932 command_runner.go:130] >             "CriuImagePath": "",
	I1208 00:30:46.291588  890932 command_runner.go:130] >             "CriuWorkPath": "",
	I1208 00:30:46.291593  890932 command_runner.go:130] >             "IoGid": 0,
	I1208 00:30:46.291599  890932 command_runner.go:130] >             "IoUid": 0,
	I1208 00:30:46.291604  890932 command_runner.go:130] >             "NoNewKeyring": false,
	I1208 00:30:46.291615  890932 command_runner.go:130] >             "Root": "",
	I1208 00:30:46.291619  890932 command_runner.go:130] >             "ShimCgroup": "",
	I1208 00:30:46.291624  890932 command_runner.go:130] >             "SystemdCgroup": false
	I1208 00:30:46.291627  890932 command_runner.go:130] >           },
	I1208 00:30:46.291641  890932 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1208 00:30:46.291648  890932 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1208 00:30:46.291655  890932 command_runner.go:130] >           "runtimePath": "",
	I1208 00:30:46.291660  890932 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1208 00:30:46.291664  890932 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1208 00:30:46.291668  890932 command_runner.go:130] >           "snapshotter": ""
	I1208 00:30:46.291672  890932 command_runner.go:130] >         }
	I1208 00:30:46.291675  890932 command_runner.go:130] >       }
	I1208 00:30:46.291678  890932 command_runner.go:130] >     },
	I1208 00:30:46.291689  890932 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1208 00:30:46.291698  890932 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1208 00:30:46.291705  890932 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1208 00:30:46.291709  890932 command_runner.go:130] >     "disableApparmor": false,
	I1208 00:30:46.291714  890932 command_runner.go:130] >     "disableHugetlbController": true,
	I1208 00:30:46.291721  890932 command_runner.go:130] >     "disableProcMount": false,
	I1208 00:30:46.291726  890932 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1208 00:30:46.291730  890932 command_runner.go:130] >     "enableCDI": true,
	I1208 00:30:46.291740  890932 command_runner.go:130] >     "enableSelinux": false,
	I1208 00:30:46.291745  890932 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1208 00:30:46.291749  890932 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1208 00:30:46.291753  890932 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1208 00:30:46.291758  890932 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1208 00:30:46.291763  890932 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1208 00:30:46.291770  890932 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1208 00:30:46.291775  890932 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1208 00:30:46.291789  890932 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291798  890932 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1208 00:30:46.291803  890932 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291810  890932 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1208 00:30:46.291819  890932 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1208 00:30:46.291823  890932 command_runner.go:130] >   },
	I1208 00:30:46.291827  890932 command_runner.go:130] >   "features": {
	I1208 00:30:46.291831  890932 command_runner.go:130] >     "supplemental_groups_policy": true
	I1208 00:30:46.291835  890932 command_runner.go:130] >   },
	I1208 00:30:46.291839  890932 command_runner.go:130] >   "golang": "go1.24.9",
	I1208 00:30:46.291850  890932 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291862  890932 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291866  890932 command_runner.go:130] >   "runtimeHandlers": [
	I1208 00:30:46.291870  890932 command_runner.go:130] >     {
	I1208 00:30:46.291874  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291886  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291890  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291893  890932 command_runner.go:130] >       }
	I1208 00:30:46.291897  890932 command_runner.go:130] >     },
	I1208 00:30:46.291907  890932 command_runner.go:130] >     {
	I1208 00:30:46.291911  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291916  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291919  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291922  890932 command_runner.go:130] >       },
	I1208 00:30:46.291926  890932 command_runner.go:130] >       "name": "runc"
	I1208 00:30:46.291930  890932 command_runner.go:130] >     }
	I1208 00:30:46.291939  890932 command_runner.go:130] >   ],
	I1208 00:30:46.291952  890932 command_runner.go:130] >   "status": {
	I1208 00:30:46.291955  890932 command_runner.go:130] >     "conditions": [
	I1208 00:30:46.291959  890932 command_runner.go:130] >       {
	I1208 00:30:46.291962  890932 command_runner.go:130] >         "message": "",
	I1208 00:30:46.291966  890932 command_runner.go:130] >         "reason": "",
	I1208 00:30:46.291973  890932 command_runner.go:130] >         "status": true,
	I1208 00:30:46.291983  890932 command_runner.go:130] >         "type": "RuntimeReady"
	I1208 00:30:46.291990  890932 command_runner.go:130] >       },
	I1208 00:30:46.291993  890932 command_runner.go:130] >       {
	I1208 00:30:46.292000  890932 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1208 00:30:46.292004  890932 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1208 00:30:46.292009  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292013  890932 command_runner.go:130] >         "type": "NetworkReady"
	I1208 00:30:46.292019  890932 command_runner.go:130] >       },
	I1208 00:30:46.292022  890932 command_runner.go:130] >       {
	I1208 00:30:46.292047  890932 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1208 00:30:46.292057  890932 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1208 00:30:46.292063  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292068  890932 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1208 00:30:46.292074  890932 command_runner.go:130] >       }
	I1208 00:30:46.292077  890932 command_runner.go:130] >     ]
	I1208 00:30:46.292080  890932 command_runner.go:130] >   }
	I1208 00:30:46.292083  890932 command_runner.go:130] > }
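The JSON above is containerd's CRI status as minikube captures it (effectively `crictl info`): RuntimeReady is true, but NetworkReady is false with reason NetworkPluginNotReady because no CNI config exists yet in /etc/cni/net.d, which is why the very next step creates a CNI manager. A minimal sketch of extracting those conditions yourself, assuming `crictl` is installed and configured for the same containerd socket:

```go
// Sketch: read containerd's CRI status and print the runtime conditions
// shown in the log (RuntimeReady, NetworkReady, ...). Assumes crictl is
// installed and pointed at /run/containerd/containerd.sock.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		log.Fatalf("crictl info: %v", err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("parsing crictl output: %v", err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%-35s status=%-5v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```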
	I1208 00:30:46.295037  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:46.295064  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:46.295108  890932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:30:46.295135  890932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:30:46.295307  890932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
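The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch, assuming `gopkg.in/yaml.v3` is available, that splits such a stream and lists each document's kind:

```go
// Sketch: walk the multi-document kubeadm.yaml stream and print each
// document's apiVersion/kind. The path is the one minikube writes in the
// log; gopkg.in/yaml.v3 is an assumed dependency.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yields one document per "---" separator
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```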
	I1208 00:30:46.295389  890932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:30:46.302776  890932 command_runner.go:130] > kubeadm
	I1208 00:30:46.302853  890932 command_runner.go:130] > kubectl
	I1208 00:30:46.302863  890932 command_runner.go:130] > kubelet
	I1208 00:30:46.303600  890932 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:30:46.303710  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:30:46.311760  890932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:30:46.325760  890932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:30:46.340134  890932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 00:30:46.359100  890932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:30:46.362934  890932 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:30:46.363653  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:46.491856  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:47.343005  890932 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:30:47.343028  890932 certs.go:195] generating shared ca certs ...
	I1208 00:30:47.343054  890932 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:47.343240  890932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:30:47.343312  890932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:30:47.343326  890932 certs.go:257] generating profile certs ...
	I1208 00:30:47.343460  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:30:47.343536  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:30:47.343590  890932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:30:47.343612  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:30:47.343630  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:30:47.343655  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:30:47.343671  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:30:47.343691  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:30:47.343706  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:30:47.343719  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:30:47.343734  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:30:47.343800  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:30:47.343845  890932 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:30:47.343860  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:30:47.343888  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:30:47.343924  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:30:47.343960  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:30:47.344029  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:47.344078  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.344096  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem -> /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.344112  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.344800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:30:47.365934  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:30:47.392004  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:30:47.412283  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:30:47.434592  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:30:47.452176  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:30:47.471245  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:30:47.489925  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:30:47.511686  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:30:47.530800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:30:47.549900  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:30:47.568360  890932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:30:47.581856  890932 ssh_runner.go:195] Run: openssl version
	I1208 00:30:47.588310  890932 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:30:47.588394  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.596457  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:30:47.604012  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607834  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607889  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607941  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.648743  890932 command_runner.go:130] > 3ec20f2e
	I1208 00:30:47.649210  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:30:47.656730  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.664307  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:30:47.671943  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.675995  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676036  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676087  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.716996  890932 command_runner.go:130] > b5213941
	I1208 00:30:47.717090  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:30:47.724719  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.732215  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:30:47.740036  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744030  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744106  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744186  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.784659  890932 command_runner.go:130] > 51391683
	I1208 00:30:47.785207  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
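The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the OpenSSL trust store: the symlink name is the certificate's subject hash plus `.0` (b5213941.0 for minikubeCA.pem, for example). A sketch of the same update, shelling out to the `openssl` binary exactly as the log does; it assumes openssl is on PATH and needs root to write /etc/ssl/certs:

```go
// Sketch: install a CA certificate into the OpenSSL trust store by
// symlinking /etc/ssl/certs/<subject-hash>.0 at the PEM file, mirroring
// the ln -fs steps in the log above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
}
```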
	I1208 00:30:47.792679  890932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796767  890932 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796815  890932 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:30:47.796824  890932 command_runner.go:130] > Device: 259,1	Inode: 3390890     Links: 1
	I1208 00:30:47.796831  890932 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:47.796838  890932 command_runner.go:130] > Access: 2025-12-08 00:26:39.668848968 +0000
	I1208 00:30:47.796844  890932 command_runner.go:130] > Modify: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796849  890932 command_runner.go:130] > Change: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796854  890932 command_runner.go:130] >  Birth: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796956  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:30:47.837955  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.838424  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:30:47.879403  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.879847  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:30:47.921180  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.921679  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:30:47.962513  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.963017  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:30:48.007633  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:48.007748  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:30:48.052514  890932 command_runner.go:130] > Certificate will not expire
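Each `openssl x509 -checkend 86400` run above asks whether a certificate is still valid 24 hours from now; "Certificate will not expire" means the check passed. The same test can be done natively in Go with crypto/x509; a minimal sketch using one of the paths from the log:

```go
// Sketch: the native equivalent of `openssl x509 -checkend 86400`:
// parse a PEM certificate and test whether it is still valid 24 hours
// from now. Path taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```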
	I1208 00:30:48.052941  890932 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:48.053033  890932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:30:48.053097  890932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:30:48.081438  890932 cri.go:89] found id: ""
	I1208 00:30:48.081565  890932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:30:48.089271  890932 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:30:48.089305  890932 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:30:48.089313  890932 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:30:48.093391  890932 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:30:48.093432  890932 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:30:48.093495  890932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:30:48.102864  890932 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:30:48.103337  890932 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.103450  890932 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "functional-386544" cluster setting kubeconfig missing "functional-386544" context setting]
	I1208 00:30:48.103819  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.104260  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.104413  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.105009  890932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:30:48.105030  890932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:30:48.105036  890932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:30:48.105041  890932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:30:48.105047  890932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:30:48.105105  890932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:30:48.105315  890932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:30:48.117774  890932 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:30:48.117857  890932 kubeadm.go:602] duration metric: took 24.417752ms to restartPrimaryControlPlane
	I1208 00:30:48.117881  890932 kubeadm.go:403] duration metric: took 64.945899ms to StartCluster
	I1208 00:30:48.117925  890932 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.118025  890932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.118797  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.119107  890932 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:30:48.119487  890932 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:30:48.119575  890932 addons.go:70] Setting storage-provisioner=true in profile "functional-386544"
	I1208 00:30:48.119600  890932 addons.go:239] Setting addon storage-provisioner=true in "functional-386544"
	I1208 00:30:48.119601  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:48.119630  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.120591  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.119636  890932 addons.go:70] Setting default-storageclass=true in profile "functional-386544"
	I1208 00:30:48.120910  890932 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386544"
	I1208 00:30:48.121235  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.122185  890932 out.go:179] * Verifying Kubernetes components...
	I1208 00:30:48.124860  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:48.159125  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.159302  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.159592  890932 addons.go:239] Setting addon default-storageclass=true in "functional-386544"
	I1208 00:30:48.159620  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.160038  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.170516  890932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:30:48.173762  890932 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.173784  890932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:30:48.173857  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.210938  890932 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:48.210964  890932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:30:48.211031  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.228251  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.254642  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.338576  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:48.365732  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.388846  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.094190  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094240  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094289  890932 retry.go:31] will retry after 221.572731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094347  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094353  890932 retry.go:31] will retry after 127.29639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
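Both addon manifests fail to apply here because kubectl cannot fetch the OpenAPI schema from an apiserver that is still coming up (connection refused on port 8441), so minikube retries each apply after a short, growing delay. A hedged sketch of that retry shape; the helper below is illustrative, not minikube's actual retry.go:

```go
// Sketch: retry an action with growing, jittered delays until it succeeds
// or a deadline passes, mirroring the "will retry after ..." lines above.
// retryUntil is a hypothetical helper, not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(do func() error, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := 100 * time.Millisecond
	for {
		err := do()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if wait < 2*time.Second {
			wait *= 2 // exponential backoff, capped
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(func() error {
		attempts++
		if attempts < 4 { // simulate the apiserver refusing connections
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}
```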
	I1208 00:30:49.094558  890932 node_ready.go:35] waiting up to 6m0s for node "functional-386544" to be "Ready" ...
	I1208 00:30:49.094733  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.094831  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.095237  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.222592  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.293397  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.293520  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.293548  890932 retry.go:31] will retry after 191.192714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.316617  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.385398  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.389149  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.389192  890932 retry.go:31] will retry after 221.019406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.485459  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.544915  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.548575  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.548650  890932 retry.go:31] will retry after 430.912171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.594843  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.594928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.595415  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.610614  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.669839  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.669884  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.669904  890932 retry.go:31] will retry after 602.088887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.980400  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:50.054076  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.057921  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.057957  890932 retry.go:31] will retry after 1.251170732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.095196  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.095305  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:50.273088  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:50.333799  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.333898  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.333941  890932 retry.go:31] will retry after 841.525831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.595581  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.595651  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.595949  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:51.095803  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.096238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:51.096319  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:51.176619  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:51.234663  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.238362  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.238405  890932 retry.go:31] will retry after 1.674228806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.309626  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:51.370041  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.373759  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.373793  890932 retry.go:31] will retry after 1.825797421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.595251  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.595336  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.595859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.095576  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.095656  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.096001  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.594894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.595585  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.912970  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:52.971340  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:52.975027  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:52.975063  890932 retry.go:31] will retry after 2.158822419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[log condensed: node poll at 00:30:53.095 (no response); a storageclass.yaml apply at 00:30:53.200 failed with the same "connection refused" validation error; next retry scheduled after 2.117348765s]
	I1208 00:30:53.595941  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.596038  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.596315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:53.596377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
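[editor's note] The interleaved GET requests to /api/v1/nodes/functional-386544 are minikube's node-readiness wait (node_ready.go): it polls the node object roughly every 500ms, checks its "Ready" condition, and logs the warning above whenever the request itself fails. A minimal client-go sketch of such a poll follows, assuming the kubeconfig path from the log is readable; it illustrates the pattern and is not minikube's code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the kubeconfig path from the log is readable where this runs.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-386544", metav1.GetOptions{})
			if err != nil {
				// This is the situation in the log: the GET itself fails.
				fmt.Println("error getting node (will retry):", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
				fmt.Println("node exists but is not Ready yet")
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
		}
	}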
	[log condensed: node polls continue every ~500ms from 00:30:54.094 to 00:30:57.595, all without a response ("Ready" warning at 00:30:56.095); storage-provisioner.yaml applies at 00:30:55.134 and 00:30:57.096 failed identically (next retries after 1.888454669s and 2.451052222s); storageclass.yaml applies at 00:30:55.380 and 00:30:57.586 failed identically (next retries after 2.144073799s and 6.27239315s)]
	[log condensed: node polls continue from 00:30:58.094 to 00:31:03.595, all without a response ("Ready" warnings at 00:30:58.595, 00:31:01.096 and 00:31:03.596); storage-provisioner.yaml applies at 00:30:59.611 and 00:31:03.346 failed identically (next retries after 3.668002843s and 7.871453146s); a storageclass.yaml apply at 00:31:03.921 failed identically (next retry after 6.338984785s)]
	[log condensed: node polls continue from 00:31:04.095 to 00:31:11.095, all without a response ("Ready" warnings at 00:31:06.095, 00:31:08.595 and 00:31:10.595); a storageclass.yaml apply at 00:31:10.324 failed identically (next retry after 7.782157506s); a storage-provisioner.yaml apply at 00:31:11.304 failed identically (next retry after 13.123466664s)]
	[log condensed: node polls continue from 00:31:11.595 to 00:31:24.094, all without a response ("Ready" warnings at 00:31:13.095, 00:31:15.595, 00:31:17.596, 00:31:20.095 and 00:31:22.595); a storageclass.yaml apply at 00:31:18.168 failed identically (next retry after 8.094581729s); a storage-provisioner.yaml apply at 00:31:24.490 failed identically (next retry after 14.933128441s)]
	I1208 00:31:24.595841  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.595922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.596313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:24.596367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:25.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.094843  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.095113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:25.594817  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.594915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.595217  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.094904  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.095360  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.327725  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:26.388171  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:26.388210  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.388230  890932 retry.go:31] will retry after 17.607962094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
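The validation error itself is a side effect of the dead apiserver: before applying, kubectl downloads the OpenAPI schema from https://localhost:8441/openapi/v2 to validate the manifest, and that connection is refused. As the stderr suggests, validation can be skipped with --validate=false; a hedged sketch of invoking the same apply that way via os/exec (command and paths copied from the log, and note that with the apiserver down the apply itself would still fail):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same apply as in the log, with schema validation disabled so a
	// refused /openapi/v2 download does not fail the command outright.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
	}
}
```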
	I1208 00:31:26.595498  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.595632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.595892  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:27.095752  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.095851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.096189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:27.096258  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:27.594738  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.095672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.096073  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.595257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.595836  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.595984  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.596331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:29.596385  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:30.095156  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.095252  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.095627  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:30.595556  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.596442  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.094732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.094808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.095102  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.594886  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.595210  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:32.094828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.095216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:32.095266  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:32.595760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.595841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.596354  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.094945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.095264  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.594878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:34.094814  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.095244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:34.095287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:34.594940  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.595021  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.595365  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.094942  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.095029  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.595132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:36.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.094904  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.095255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:36.095316  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:36.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.595276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.095623  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.095696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.095973  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.594749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.594850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.595227  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:38.094987  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.095112  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:38.095555  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:38.595474  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.595556  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.595831  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.095726  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.095806  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.096148  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.491741  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:39.568327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:39.568372  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.568394  890932 retry.go:31] will retry after 16.95217324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.595718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.596632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.597031  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:40.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.095785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:40.096128  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:40.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.595175  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.094806  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.094893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.095209  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.595791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.596479  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.094922  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.095018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.095545  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.595373  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.595463  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.596518  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:31:42.596581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:43.095290  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.095363  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.095661  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.595732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.595812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.596157  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.996743  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:44.061795  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:44.065597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.065636  890932 retry.go:31] will retry after 36.030777087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.094709  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.095134  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:44.595619  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.595689  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.596188  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:45.095192  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.095284  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:45.095814  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:45.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.595664  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.596700  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:46.095471  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.095564  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.095854  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:46.595665  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.595741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.596605  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:47.095443  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.095832  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:47.095881  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:47.595397  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.595480  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.595753  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.095688  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.095797  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.096203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.594869  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:49.095593  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.095675  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.096008  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:49.096067  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:49.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.594865  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.595221  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.094833  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.596966  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:51.095757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.095841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:51.096238  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:51.594921  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.595014  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.595361  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.094793  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.095231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.594828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.094823  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.594757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.594827  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.595090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:53.595131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:54.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.095337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:54.595028  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.595111  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.595443  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.095154  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.095659  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.595576  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.595659  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.595995  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:55.596040  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:56.094916  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:56.520835  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:56.580569  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580606  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580706  890932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
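At this point the retry budget for the addon is exhausted: instead of retrying again, minikube surfaces the last apply error through its user-facing output (out.go) as a non-fatal warning, so the storage-provisioner addon is reported as failed while the start flow keeps waiting on node readiness. A minimal sketch of that give-up-and-report shape (names and structure illustrative, not minikube's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// enableAddon runs each apply callback and, rather than aborting the
// start flow, prints a warning when they fail -- the behavior behind
// the "Enabling 'storage-provisioner' returned an error" line above.
func enableAddon(name string, callbacks []func() error) {
	var errs []error
	for _, cb := range callbacks {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		fmt.Printf("! Enabling '%s' returned an error: running callbacks: %v\n",
			name, errors.Join(errs...))
	}
}

func main() {
	enableAddon("storage-provisioner", []func() error{
		func() error { return errors.New("kubectl apply: Process exited with status 1") },
	})
}
```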
	I1208 00:31:56.595717  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.595785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.596127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.094846  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.094922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.594959  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.595375  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:58.095719  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.095802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.096233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:58.096313  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:58.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.095006  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.095098  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.095434  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.594763  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.594848  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.595114  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.095594  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.595570  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.596011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:00.596082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:01.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.595258  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.096010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.595794  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.596311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:02.596371  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:03.095057  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.095145  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.095500  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:03.595367  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.595442  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.095519  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.095642  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.096000  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.595726  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.595814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.596263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:05.095616  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.095688  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.095960  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:05.096006  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:05.595742  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.595817  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.595654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.596003  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:07.095781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.095861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.096199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:07.096254  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:07.594824  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.594910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.094781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.095147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.595140  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.595213  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.595560  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.095144  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.095234  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.095578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.595126  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.595198  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.595458  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:09.595499  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.095157  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.095251  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.095657  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.595220  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.595297  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.595648  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.095385  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.095455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.095752  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.595492  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.595574  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.595922  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:11.595978  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.095776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.095855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.096220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.594787  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.595182  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.094907  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.094987  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.095332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.595577  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:13.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-386544 repeated every ~500ms from 00:32:14.095 through 00:32:20.095, each attempt refused ("connect: connection refused"); the node_ready.go "will retry" warning recurred at 00:32:16 and 00:32:18]
	I1208 00:32:20.097611  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:20.173666  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173721  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173816  890932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:32:20.177110  890932 out.go:179] * Enabled addons: 
	I1208 00:32:20.180584  890932 addons.go:530] duration metric: took 1m32.061097112s for enable addons: enabled=[]
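The storageclass apply above fails because kubectl cannot download the OpenAPI schema from the still-down apiserver (the validation step needs /openapi/v2), and addons.go logs that the apply "will retry". A minimal sketch of such an apply-with-retry wrapper follows, assuming the same shell-out to kubectl as the ssh_runner line; applyWithRetry is a hypothetical helper, not minikube's addons code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or the
// attempt budget is exhausted. Note that --validate=false (the escape
// hatch kubectl's error text suggests) would only skip the OpenAPI
// download; it cannot fix a refused connection to the apiserver.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %v: %s", e, out)
		time.Sleep(2 * time.Second) // back off before the next try
	}
	return err
}

func main() {
	// Manifest path taken from the log.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println(err)
	}
}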
	[polling continued unchanged from 00:32:20.595 through 00:33:13.096: the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-386544 every ~500ms, every attempt refused (responses in 0-1 ms), with the node_ready.go "will retry" warning repeating at 00:32:21, 00:32:23, 00:32:25, 00:32:27, 00:32:29, 00:32:31, 00:32:34, 00:32:36, 00:32:39, 00:32:41, 00:32:43, 00:32:46, 00:32:48, 00:32:51, 00:32:53, 00:32:55, 00:32:57, 00:33:00, 00:33:02, 00:33:04, 00:33:07, 00:33:09, and 00:33:12]
	I1208 00:33:13.594964  890932 type.go:168] "Request Body" body=""
	I1208 00:33:13.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:13.595409  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:14.095085  890932 type.go:168] "Request Body" body=""
	I1208 00:33:14.095158  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.095492  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:14.095548  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:14.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:33:14.594924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:14.595275  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.094998  890932 type.go:168] "Request Body" body=""
	I1208 00:33:15.095079  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.095428  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:15.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:33:15.594851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:15.595113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:33:16.094970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.095424  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:16.595117  890932 type.go:168] "Request Body" body=""
	I1208 00:33:16.595199  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:16.595552  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:16.595612  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:17.095280  890932 type.go:168] "Request Body" body=""
	I1208 00:33:17.095347  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.095662  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:17.595238  890932 type.go:168] "Request Body" body=""
	I1208 00:33:17.595324  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:17.595678  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.095532  890932 type.go:168] "Request Body" body=""
	I1208 00:33:18.095611  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.095982  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:18.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:33:18.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:18.595098  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:19.094863  890932 type.go:168] "Request Body" body=""
	I1208 00:33:19.094940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.095323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:19.095387  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:19.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:33:19.594935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:19.595281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.095680  890932 type.go:168] "Request Body" body=""
	I1208 00:33:20.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.096067  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:20.594771  890932 type.go:168] "Request Body" body=""
	I1208 00:33:20.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:20.595205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.094914  890932 type.go:168] "Request Body" body=""
	I1208 00:33:21.095000  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.095330  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:21.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:33:21.594854  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:21.595147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:21.595196  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:22.094883  890932 type.go:168] "Request Body" body=""
	I1208 00:33:22.094961  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.095312  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:22.594844  890932 type.go:168] "Request Body" body=""
	I1208 00:33:22.594926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:22.595295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.095693  890932 type.go:168] "Request Body" body=""
	I1208 00:33:23.095771  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.096058  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:23.595065  890932 type.go:168] "Request Body" body=""
	I1208 00:33:23.595151  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:23.595527  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:23.595587  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:24.095271  890932 type.go:168] "Request Body" body=""
	I1208 00:33:24.095360  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:24.595131  890932 type.go:168] "Request Body" body=""
	I1208 00:33:24.595202  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:24.595547  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:33:25.094940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.095305  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:33:25.595099  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.595430  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.095122  890932 type.go:168] "Request Body" body=""
	I1208 00:33:26.095199  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.095487  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.095533  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:33:26.594948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.595300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.095043  890932 type.go:168] "Request Body" body=""
	I1208 00:33:27.095128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.595145  890932 type.go:168] "Request Body" body=""
	I1208 00:33:27.595211  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.595478  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.095182  890932 type.go:168] "Request Body" body=""
	I1208 00:33:28.095261  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.095626  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.095683  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:28.595637  890932 type.go:168] "Request Body" body=""
	I1208 00:33:28.595718  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.596082  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.095644  890932 type.go:168] "Request Body" body=""
	I1208 00:33:29.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.096085  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.594793  890932 type.go:168] "Request Body" body=""
	I1208 00:33:29.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.595201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:33:30.094986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.095390  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.594782  890932 type.go:168] "Request Body" body=""
	I1208 00:33:30.594853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.595110  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.595152  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:33:31.094935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:31.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:33:32.094856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.095156  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.594832  890932 type.go:168] "Request Body" body=""
	I1208 00:33:32.594907  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.595282  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:33:33.094953  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:33.594873  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.595155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.094850  890932 type.go:168] "Request Body" body=""
	I1208 00:33:34.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.594897  890932 type.go:168] "Request Body" body=""
	I1208 00:33:34.594986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.595405  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.595460  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.094996  890932 type.go:168] "Request Body" body=""
	I1208 00:33:35.095074  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.095402  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.595223  890932 type.go:168] "Request Body" body=""
	I1208 00:33:35.595458  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.596025  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:36.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.096170  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.595656  890932 type.go:168] "Request Body" body=""
	I1208 00:33:36.595731  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.596020  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:36.596064  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:33:37.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.095205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:33:37.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.595281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.095648  890932 type.go:168] "Request Body" body=""
	I1208 00:33:38.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.096057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:33:38.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.095004  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.095087  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.095436  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.095492  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.595152  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.595232  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.595511  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.094868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.095291  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.594994  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.595078  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.595449  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.095641  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.095710  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.095987  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.096029  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:41.595774  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.595854  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.094945  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.095040  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.095447  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.594810  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.594880  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.095281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.595149  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.595226  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.595578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:43.595639  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.095775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.096055  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.594826  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.594909  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.595246  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.094986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.595704  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.595779  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.596135  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:45.596188  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.095300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.094796  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.094870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.095149  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.594854  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.594930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.095000  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.095085  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.095460  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.095511  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:48.595390  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.595476  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.595748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.095572  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.095647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.095999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.595785  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.596224  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.095203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.594890  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.595313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:50.595368  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.094876  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.095346  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.595652  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.095805  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.096192  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.594926  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.595378  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:52.595433  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.094786  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.594865  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.094881  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.094965  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.595582  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.595660  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.595948  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:54.595991  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:55.095773  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.095890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.096222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.595262  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.095686  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.095975  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.595757  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.595833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.596223  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:56.596285  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:57.094855  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.095265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.595601  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.595670  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.595954  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.095735  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.095811  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.096159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.594840  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.594919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.095680  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.095963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:59.096015  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:59.595776  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.595860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.596187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.094948  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.095044  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.594807  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.595187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.094949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.594909  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.595385  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:01.595446  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:02.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.095145  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	... (log truncated: the GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 request/response pair above repeats twice per second from 00:34:02 through 00:35:04, returning an empty response with milliseconds=0 each time; every few polls node_ready.go:55 logs the retry warning below)
	W1208 00:35:03.596048  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:04.594969  890932 type.go:168] "Request Body" body=""
	I1208 00:35:04.595053  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:04.595398  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.094933  890932 type.go:168] "Request Body" body=""
	I1208 00:35:05.095017  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.095353  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:05.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:05.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:05.595274  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:06.095008  890932 type.go:168] "Request Body" body=""
	I1208 00:35:06.095087  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.095436  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:06.095490  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:06.594782  890932 type.go:168] "Request Body" body=""
	I1208 00:35:06.594852  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:06.595138  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:35:07.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.095278  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:07.595328  890932 type.go:168] "Request Body" body=""
	I1208 00:35:07.595416  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:07.595759  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:08.095454  890932 type.go:168] "Request Body" body=""
	I1208 00:35:08.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.095798  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:08.095845  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:08.594864  890932 type.go:168] "Request Body" body=""
	I1208 00:35:08.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:08.595300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:35:09.094946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:09.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:35:09.594870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:09.595176  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.094901  890932 type.go:168] "Request Body" body=""
	I1208 00:35:10.094991  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.095347  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:10.595069  890932 type.go:168] "Request Body" body=""
	I1208 00:35:10.595149  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:10.595526  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:10.595586  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:11.095553  890932 type.go:168] "Request Body" body=""
	I1208 00:35:11.095640  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.095940  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:11.595715  890932 type.go:168] "Request Body" body=""
	I1208 00:35:11.595795  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:11.596135  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.094800  890932 type.go:168] "Request Body" body=""
	I1208 00:35:12.094884  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.095243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:12.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:35:12.594901  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:12.595178  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:13.094902  890932 type.go:168] "Request Body" body=""
	I1208 00:35:13.094979  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.095307  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:13.095363  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:13.595223  890932 type.go:168] "Request Body" body=""
	I1208 00:35:13.595301  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:13.595667  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.095197  890932 type.go:168] "Request Body" body=""
	I1208 00:35:14.095270  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.095550  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:14.595257  890932 type.go:168] "Request Body" body=""
	I1208 00:35:14.595339  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:14.595725  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:15.095585  890932 type.go:168] "Request Body" body=""
	I1208 00:35:15.095697  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.096126  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:15.096187  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:15.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:35:15.594862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:15.595129  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.094866  890932 type.go:168] "Request Body" body=""
	I1208 00:35:16.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.095307  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:16.595015  890932 type.go:168] "Request Body" body=""
	I1208 00:35:16.595117  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:16.595527  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.094794  890932 type.go:168] "Request Body" body=""
	I1208 00:35:17.094867  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.594835  890932 type.go:168] "Request Body" body=""
	I1208 00:35:17.594911  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.595233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:17.595290  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:18.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:35:18.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.095301  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.595435  890932 type.go:168] "Request Body" body=""
	I1208 00:35:18.595508  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.595780  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.095641  890932 type.go:168] "Request Body" body=""
	I1208 00:35:19.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.096078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.594777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:19.594858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.595160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.094736  890932 type.go:168] "Request Body" body=""
	I1208 00:35:20.094818  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.095118  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:20.095178  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:20.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:20.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:35:21.095027  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.595667  890932 type.go:168] "Request Body" body=""
	I1208 00:35:21.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.094791  890932 type.go:168] "Request Body" body=""
	I1208 00:35:22.094877  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.095219  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:22.095276  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:22.594927  890932 type.go:168] "Request Body" body=""
	I1208 00:35:22.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.595337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.094807  890932 type.go:168] "Request Body" body=""
	I1208 00:35:23.094885  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.095213  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.594870  890932 type.go:168] "Request Body" body=""
	I1208 00:35:23.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.595296  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:35:24.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:24.095323  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.595667  890932 type.go:168] "Request Body" body=""
	I1208 00:35:24.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.596024  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.094819  890932 type.go:168] "Request Body" body=""
	I1208 00:35:25.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.095316  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:25.594932  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.595268  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.095610  890932 type.go:168] "Request Body" body=""
	I1208 00:35:26.095690  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.095967  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:26.096009  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.594774  890932 type.go:168] "Request Body" body=""
	I1208 00:35:26.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.595220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:35:27.094942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.095279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:35:27.594858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.595172  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:28.094940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.095286  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:35:28.594890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:28.595297  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:29.095621  890932 type.go:168] "Request Body" body=""
	I1208 00:35:29.095690  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.594756  890932 type.go:168] "Request Body" body=""
	I1208 00:35:29.594833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.595168  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.094972  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.095501  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.594791  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.594870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.094879  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.095299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:31.095357  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:31.594860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.595255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.094855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.095179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.594962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.595305  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.095036  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.095132  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.095524  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:33.095581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.594888  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.594964  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.595242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.094933  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.095392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.594946  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.595024  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.595376  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.095094  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.095178  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.095522  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.594940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.595291  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:36.094854  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.095261  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.594784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.594860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.595205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.594935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.595344  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:38.095615  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.095691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.095993  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.595236  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.094849  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.594821  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.594893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.595159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.094832  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.094914  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:40.095383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.595050  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.595133  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.095165  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.095247  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.595432  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.595533  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.595908  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.095710  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.096304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.096383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.594857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.595136  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.595212  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.595549  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.095717  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.095796  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.096072  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.595340  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.595058  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.595128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.595471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.095186  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.095266  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.595402  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.595481  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.595824  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.595879  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.595618  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.595696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.596010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.095716  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.095799  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.096202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.595337  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.595413  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.595706  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.095444  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.095524  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.095902  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.095961  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.595540  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.595625  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.595976  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.095709  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.095792  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.096095  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.594799  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.594874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.094959  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.095064  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.095433  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.594801  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.595287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.094975  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.095331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.595042  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.595124  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.595480  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.095139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.594836  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.595282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.595384  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 request/response cycle repeats every ~500ms from 00:35:54 through 00:36:48, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; duplicate iterations elided ...]
	I1208 00:36:48.594789  890932 type.go:168] "Request Body" body=""
	I1208 00:36:48.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.094763  890932 type.go:168] "Request Body" body=""
	I1208 00:36:49.094842  890932 node_ready.go:38] duration metric: took 6m0.000209264s for node "functional-386544" to be "Ready" ...
	I1208 00:36:49.097838  890932 out.go:203] 
	W1208 00:36:49.100712  890932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:36:49.100735  890932 out.go:285] * 
	W1208 00:36:49.102896  890932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:36:49.105576  890932 out.go:203] 
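The six minutes of retries above follow a plain poll-until-deadline pattern: issue a GET for the node object every ~500ms, log a warning when the call fails, and abort with "context deadline exceeded" once the overall budget is spent. A minimal client-go sketch of that pattern follows; the function name, kubeconfig path handling, and client wiring are illustrative assumptions, not minikube's actual node_ready.go code.

    // Poll-until-deadline sketch of the node "Ready" wait seen in the log.
    // Hypothetical names; minikube's real implementation lives in node_ready.go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        tick := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence above
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // Corresponds to the W... node_ready.go:55 lines: log and retry.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                // After 6m this surfaces as the "context deadline exceeded" above.
                return fmt.Errorf("waiting for node %q: %w", name, ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitNodeReady(ctx, cs, "functional-386544"); err != nil {
            fmt.Println(err)
        }
    }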
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038651896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038674026Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038731429Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038749251Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038760107Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038773202Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038782999Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038794954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038811882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038846729Z" level=info msg="Connect containerd service"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.039475923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.040148999Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051184448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051253471Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051289894Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051318284Z" level=info msg="Start recovering state"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097113159Z" level=info msg="Start event monitor"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097165779Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097179465Z" level=info msg="Start streaming server"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097189672Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097198969Z" level=info msg="runtime interface starting up..."
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097205484Z" level=info msg="starting plugins..."
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097218112Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097506723Z" level=info msg="containerd successfully booted in 0.085148s"
	Dec 08 00:30:46 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
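	Note: the "failed to load cni during init" error above is the usual state before a CNI config lands in /etc/cni/net.d; it only matters here because the node never progressed far enough for minikube's kindnet setup to write one. A minimal check, assuming the kic container name from this run and that docker exec can reach the node:
	
	  # list CNI configs inside the node; an empty directory is consistent
	  # with the containerd error above
	  docker exec functional-386544 ls -la /etc/cni/net.d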
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:36:50.911941    8509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:50.912663    8509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:50.914302    8509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:50.914870    8509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:50.916276    8509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
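	Note: every request above fails with "connection refused" on localhost:8441, i.e. nothing is listening on the apiserver port. Two illustrative probes (assuming curl is available in the kic image; 33561 is the host-side mapping for 8441/tcp shown in the docker inspect output below):
	
	  # from inside the node (8441 is the --apiserver-port for this profile)
	  docker exec functional-386544 curl -ksS --max-time 2 https://localhost:8441/healthz
	  # or from the host, via the published port
	  curl -ksS --max-time 2 https://127.0.0.1:33561/healthz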
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:36:50 up  5:19,  0 user,  load average: 0.21, 0.38, 1.05
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:36:47 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:48 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 08 00:36:48 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:48 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:48 functional-386544 kubelet[8391]: E1208 00:36:48.642955    8391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:48 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:48 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:49 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 08 00:36:49 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:49 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:49 functional-386544 kubelet[8397]: E1208 00:36:49.401475    8397 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:49 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:49 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 08 00:36:50 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 kubelet[8418]: E1208 00:36:50.155515    8418 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 08 00:36:50 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 kubelet[8503]: E1208 00:36:50.902254    8503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
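	Note: the kubelet is crash-looping (restart counter 809-812) on the same validation error each time: this kubelet build refuses to start on a cgroup v1 host, and the Ubuntu 20.04 / 5.15 host in this run still boots the legacy v1 hierarchy by default. A one-line check of the host's cgroup mode (an illustrative command, not part of the test run):
	
	  # "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means legacy v1,
	  # which would match the kubelet validation failure above
	  stat -fc %T /sys/fs/cgroup/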
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (390.447332ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
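helpers_test reads one status field per call via Go templates ({{.APIServer}} here, {{.Host}} elsewhere in this report). The fields can also be combined into a single invocation; a sketch, assuming a {{.Kubelet}} field exists alongside the two the tests already query:

  out/minikube-linux-arm64 status -p functional-386544 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'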
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.82s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-386544 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-386544 get po -A: exit status 1 (58.744158ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-386544 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-386544 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-386544 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
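The inspect output above shows the apiserver port 8441/tcp published on 127.0.0.1:33561. The same Go-template pattern minikube itself uses for the SSH port (visible in the Last Start log below) reads the mapping back directly; a sketch using the profile name from this run:

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-386544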
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (323.03278ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-932121 ssh sudo cat /usr/share/ca-certificates/8467112.pem                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ update-context │ functional-932121 update-context --alsologtostderr -v=2                                                                                                         │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save kicbase/echo-server:functional-932121 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image rm kicbase/echo-server:functional-932121 --alsologtostderr                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image save --daemon kicbase/echo-server:functional-932121 --alsologtostderr                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format short --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format yaml --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format json --alsologtostderr                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls --format table --alsologtostderr                                                                                                     │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh            │ functional-932121 ssh pgrep buildkitd                                                                                                                           │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image          │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                          │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image          │ functional-932121 image ls                                                                                                                                      │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete         │ -p functional-932121                                                                                                                                            │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start          │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start          │ -p functional-386544 --alsologtostderr -v=8                                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:30:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:30:43.106412  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106440  890932 out.go:374] Setting ErrFile to fd 2...
	I1208 00:30:43.106489  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106802  890932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:30:43.107327  890932 out.go:368] Setting JSON to false
	I1208 00:30:43.108252  890932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18796,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:30:43.108353  890932 start.go:143] virtualization:  
	I1208 00:30:43.111927  890932 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:30:43.114895  890932 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:30:43.114974  890932 notify.go:221] Checking for updates...
	I1208 00:30:43.121042  890932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:30:43.124118  890932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:43.127146  890932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:30:43.130017  890932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:30:43.132953  890932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:30:43.136385  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:43.136518  890932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:30:43.171722  890932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:30:43.171844  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.232988  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.222800102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.233101  890932 docker.go:319] overlay module found
	I1208 00:30:43.236209  890932 out.go:179] * Using the docker driver based on existing profile
	I1208 00:30:43.239024  890932 start.go:309] selected driver: docker
	I1208 00:30:43.239046  890932 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.240193  890932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:30:43.240306  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.299458  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.288388391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.299888  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:43.299955  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:43.300012  890932 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.303163  890932 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:30:43.305985  890932 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:30:43.309025  890932 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:30:43.312042  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:43.312102  890932 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:30:43.312113  890932 cache.go:65] Caching tarball of preloaded images
	I1208 00:30:43.312160  890932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:30:43.312254  890932 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:30:43.312266  890932 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:30:43.312379  890932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:30:43.332475  890932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:30:43.332500  890932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:30:43.332516  890932 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:30:43.332550  890932 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:30:43.332614  890932 start.go:364] duration metric: took 40.517µs to acquireMachinesLock for "functional-386544"
	I1208 00:30:43.332637  890932 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:30:43.332643  890932 fix.go:54] fixHost starting: 
	I1208 00:30:43.332918  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:43.364362  890932 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:30:43.364391  890932 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:30:43.367522  890932 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:30:43.367561  890932 machine.go:94] provisionDockerMachine start ...
	I1208 00:30:43.367667  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.390594  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.390943  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.390953  890932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:30:43.546039  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.546064  890932 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:30:43.546132  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.563909  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.564221  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.564240  890932 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:30:43.728055  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.728136  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.746428  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.746778  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.746805  890932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:30:43.898980  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:30:43.899007  890932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:30:43.899068  890932 ubuntu.go:190] setting up certificates
	I1208 00:30:43.899078  890932 provision.go:84] configureAuth start
	I1208 00:30:43.899155  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:43.917225  890932 provision.go:143] copyHostCerts
	I1208 00:30:43.917271  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917317  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:30:43.917335  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917414  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:30:43.917515  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917537  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:30:43.917547  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917575  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:30:43.917632  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917656  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:30:43.917664  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917691  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:30:43.917796  890932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:30:44.201729  890932 provision.go:177] copyRemoteCerts
	I1208 00:30:44.201799  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:30:44.201847  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.218852  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.326622  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:30:44.326687  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:30:44.345138  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:30:44.345250  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:30:44.363475  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:30:44.363575  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:30:44.382571  890932 provision.go:87] duration metric: took 483.468304ms to configureAuth
	I1208 00:30:44.382643  890932 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:30:44.382843  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:44.382857  890932 machine.go:97] duration metric: took 1.015288541s to provisionDockerMachine
	I1208 00:30:44.382865  890932 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:30:44.382880  890932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:30:44.382939  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:30:44.382987  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.401380  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.506846  890932 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:30:44.510586  890932 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:30:44.510612  890932 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:30:44.510623  890932 command_runner.go:130] > VERSION_ID="12"
	I1208 00:30:44.510628  890932 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:30:44.510633  890932 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:30:44.510637  890932 command_runner.go:130] > ID=debian
	I1208 00:30:44.510641  890932 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:30:44.510646  890932 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:30:44.510652  890932 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:30:44.510734  890932 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:30:44.510755  890932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:30:44.510768  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:30:44.510833  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:30:44.510921  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:30:44.510932  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /etc/ssl/certs/8467112.pem
	I1208 00:30:44.511028  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:30:44.511037  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> /etc/test/nested/copy/846711/hosts
	I1208 00:30:44.511082  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:30:44.518977  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:44.538494  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:30:44.556928  890932 start.go:296] duration metric: took 174.046033ms for postStartSetup
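
(The postStartSetup above mirrors everything under .minikube/files into the node verbatim: files/etc/ssl/certs/8467112.pem lands at /etc/ssl/certs/8467112.pem. A minimal Go sketch of that mirroring follows; copyToNode is a hypothetical stand-in for minikube's ssh_runner-based scp, not its real API.)

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
        "strings"
    )

    // copyToNode is a hypothetical stand-in for an scp over the node's SSH port.
    func copyToNode(local, remote string) error {
        fmt.Printf("scp %s --> %s\n", local, remote)
        return nil
    }

    func main() {
        root := "/home/jenkins/minikube-integration/22054-843440/.minikube/files"
        err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
            if err != nil {
                return err
            }
            if d.IsDir() {
                return nil
            }
            // The path relative to .minikube/files becomes the absolute path on the node.
            remote := strings.TrimPrefix(path, root)
            return copyToNode(path, remote)
        })
        if err != nil {
            panic(err)
        }
    }
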
	I1208 00:30:44.557012  890932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:30:44.557057  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.579278  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.683552  890932 command_runner.go:130] > 11%
	I1208 00:30:44.683622  890932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:30:44.688016  890932 command_runner.go:130] > 174G
	I1208 00:30:44.688056  890932 fix.go:56] duration metric: took 1.355411206s for fixHost
	I1208 00:30:44.688067  890932 start.go:83] releasing machines lock for "functional-386544", held for 1.355443108s
	I1208 00:30:44.688146  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:44.705277  890932 ssh_runner.go:195] Run: cat /version.json
	I1208 00:30:44.705345  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.705617  890932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:30:44.705687  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.723084  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.728238  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.826153  890932 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:30:44.826300  890932 ssh_runner.go:195] Run: systemctl --version
	I1208 00:30:44.917784  890932 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:30:44.920412  890932 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:30:44.920484  890932 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:30:44.920574  890932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:30:44.924900  890932 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:30:44.925095  890932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:30:44.925215  890932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:30:44.933474  890932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:30:44.933497  890932 start.go:496] detecting cgroup driver to use...
	I1208 00:30:44.933530  890932 detect.go:187] detected "cgroupfs" cgroup driver on host os
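
(detect.go reports "cgroupfs" here because the host is still on cgroup v1; the containerd deprecation warning further down in the crictl info output says the same. One common way to tell the two hierarchies apart, as a hedged sketch rather than minikube's actual detect.go logic:)

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On a unified (cgroup v2) hierarchy this control file exists at the root.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (systemd driver is the usual choice)")
        } else {
            fmt.Println("cgroup v1 (cgroupfs driver, as detected above)")
        }
    }
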
	I1208 00:30:44.933580  890932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:30:44.950010  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:30:44.963687  890932 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:30:44.963783  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:30:44.980391  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:30:44.994304  890932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:30:45.255981  890932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:30:45.407305  890932 docker.go:234] disabling docker service ...
	I1208 00:30:45.407423  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:30:45.423468  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:30:45.437222  890932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:30:45.561603  890932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:30:45.705878  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
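
(Stopping a unit only lasts until the next boot, which is why the log stops, disables, and masks cri-docker and docker before switching to containerd. A sketch of the same systemctl sequence driven from Go; it assumes systemd and sudo, and like the log it tolerates stops of already-inactive units:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            // stop -f may fail if the unit is already inactive; that is fine here.
            _ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
        }
        // disable + mask keep the units from returning at boot or via socket activation.
        for _, args := range [][]string{
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %s\n", err, out)
            }
        }
    }
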
	I1208 00:30:45.719726  890932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:30:45.733506  890932 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1208 00:30:45.735147  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:30:45.744694  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:30:45.753960  890932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:30:45.754081  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:30:45.763511  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.772723  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:30:45.781584  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.790600  890932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:30:45.799135  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:30:45.808317  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:30:45.817244  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 00:30:45.826211  890932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:30:45.833037  890932 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:30:45.834008  890932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:30:45.841603  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:45.965344  890932 ssh_runner.go:195] Run: sudo systemctl restart containerd
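
(All the sed runs above edit /etc/containerd/config.toml in place before the restart. The two that matter most for this run — pinning the sandbox image and matching the cgroup driver — expressed as a hedged Go sketch using the same regex patterns as the log's sed expressions, not minikube's actual code:)

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        data, err := os.ReadFile("/etc/containerd/config.toml")
        if err != nil {
            panic(err)
        }
        cfg := string(data)
        // Pin the sandbox (pause) image the kubelet expects for this Kubernetes version.
        cfg = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
            ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
        // Match containerd's cgroup driver to the detected "cgroupfs" host driver;
        // the generated kubelet config below sets cgroupDriver: cgroupfs to agree.
        cfg = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
            ReplaceAllString(cfg, "${1}SystemdCgroup = false")
        if err := os.WriteFile("/etc/containerd/config.toml", []byte(cfg), 0o644); err != nil {
            panic(err)
        }
    }
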
	I1208 00:30:46.100261  890932 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:30:46.100385  890932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:30:46.104210  890932 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1208 00:30:46.104295  890932 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:30:46.104358  890932 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1208 00:30:46.104385  890932 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:46.104410  890932 command_runner.go:130] > Access: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104446  890932 command_runner.go:130] > Modify: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104470  890932 command_runner.go:130] > Change: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104490  890932 command_runner.go:130] >  Birth: -
	I1208 00:30:46.104859  890932 start.go:564] Will wait 60s for crictl version
	I1208 00:30:46.104961  890932 ssh_runner.go:195] Run: which crictl
	I1208 00:30:46.108543  890932 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:30:46.108924  890932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:30:46.136367  890932 command_runner.go:130] > Version:  0.1.0
	I1208 00:30:46.136449  890932 command_runner.go:130] > RuntimeName:  containerd
	I1208 00:30:46.136470  890932 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1208 00:30:46.136491  890932 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:30:46.136542  890932 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:30:46.136636  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.156742  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.159302  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.181269  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.189080  890932 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:30:46.192076  890932 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:30:46.209081  890932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:30:46.212923  890932 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:30:46.213097  890932 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:30:46.213209  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:46.213289  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.236482  890932 command_runner.go:130] > {
	I1208 00:30:46.236506  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.236511  890932 command_runner.go:130] >     {
	I1208 00:30:46.236520  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.236526  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236531  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.236534  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236538  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236551  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.236558  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236563  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.236571  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236576  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236586  890932 command_runner.go:130] >     },
	I1208 00:30:46.236590  890932 command_runner.go:130] >     {
	I1208 00:30:46.236601  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.236605  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236610  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.236617  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236622  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236632  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.236641  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236646  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.236649  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236654  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236664  890932 command_runner.go:130] >     },
	I1208 00:30:46.236668  890932 command_runner.go:130] >     {
	I1208 00:30:46.236675  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.236679  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236687  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.236690  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236699  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236718  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.236722  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236728  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.236733  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.236740  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236743  890932 command_runner.go:130] >     },
	I1208 00:30:46.236746  890932 command_runner.go:130] >     {
	I1208 00:30:46.236753  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.236760  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236766  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.236769  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236773  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236781  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.236788  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236792  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.236803  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236808  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236814  890932 command_runner.go:130] >       },
	I1208 00:30:46.236821  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236825  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236828  890932 command_runner.go:130] >     },
	I1208 00:30:46.236832  890932 command_runner.go:130] >     {
	I1208 00:30:46.236841  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.236847  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236853  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.236856  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236860  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236868  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.236874  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236879  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.236885  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236894  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236901  890932 command_runner.go:130] >       },
	I1208 00:30:46.236908  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236912  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236916  890932 command_runner.go:130] >     },
	I1208 00:30:46.236926  890932 command_runner.go:130] >     {
	I1208 00:30:46.236933  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.236937  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236943  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.236947  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236951  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236962  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.236968  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236972  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.236976  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236980  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236989  890932 command_runner.go:130] >       },
	I1208 00:30:46.236993  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236997  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237002  890932 command_runner.go:130] >     },
	I1208 00:30:46.237005  890932 command_runner.go:130] >     {
	I1208 00:30:46.237012  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.237017  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237024  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.237027  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237032  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237040  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.237047  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237055  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.237059  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237063  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237066  890932 command_runner.go:130] >     },
	I1208 00:30:46.237076  890932 command_runner.go:130] >     {
	I1208 00:30:46.237084  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.237095  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237104  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.237107  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237112  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237120  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.237126  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237131  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.237134  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237139  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.237142  890932 command_runner.go:130] >       },
	I1208 00:30:46.237146  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237153  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237157  890932 command_runner.go:130] >     },
	I1208 00:30:46.237166  890932 command_runner.go:130] >     {
	I1208 00:30:46.237173  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.237178  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237182  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.237189  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237193  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237201  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.237206  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237210  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.237216  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237221  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.237227  890932 command_runner.go:130] >       },
	I1208 00:30:46.237231  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237235  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.237238  890932 command_runner.go:130] >     }
	I1208 00:30:46.237241  890932 command_runner.go:130] >   ]
	I1208 00:30:46.237244  890932 command_runner.go:130] > }
	I1208 00:30:46.239834  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.239857  890932 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:30:46.239919  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.262227  890932 command_runner.go:130] > {
	I1208 00:30:46.262250  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.262255  890932 command_runner.go:130] >     {
	I1208 00:30:46.262265  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.262280  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262286  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.262289  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262293  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262303  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.262310  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262315  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.262319  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262323  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262326  890932 command_runner.go:130] >     },
	I1208 00:30:46.262330  890932 command_runner.go:130] >     {
	I1208 00:30:46.262348  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.262357  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262363  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.262366  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262370  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262381  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.262386  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262392  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.262396  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262400  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262403  890932 command_runner.go:130] >     },
	I1208 00:30:46.262406  890932 command_runner.go:130] >     {
	I1208 00:30:46.262413  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.262427  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262439  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.262476  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262489  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262498  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.262502  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262506  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.262513  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.262517  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262524  890932 command_runner.go:130] >     },
	I1208 00:30:46.262531  890932 command_runner.go:130] >     {
	I1208 00:30:46.262539  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.262542  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262548  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.262553  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262557  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262565  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.262568  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262572  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.262579  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262583  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262588  890932 command_runner.go:130] >       },
	I1208 00:30:46.262592  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262605  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262609  890932 command_runner.go:130] >     },
	I1208 00:30:46.262612  890932 command_runner.go:130] >     {
	I1208 00:30:46.262619  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.262625  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262631  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.262634  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262638  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262646  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.262649  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262654  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.262660  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262678  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262686  890932 command_runner.go:130] >       },
	I1208 00:30:46.262690  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262694  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262697  890932 command_runner.go:130] >     },
	I1208 00:30:46.262701  890932 command_runner.go:130] >     {
	I1208 00:30:46.262707  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.262718  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262724  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.262727  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262731  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262739  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.262745  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262749  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.262755  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262759  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262772  890932 command_runner.go:130] >       },
	I1208 00:30:46.262776  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262780  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262783  890932 command_runner.go:130] >     },
	I1208 00:30:46.262786  890932 command_runner.go:130] >     {
	I1208 00:30:46.262793  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.262800  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262805  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.262809  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262812  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262819  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.262823  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262827  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.262834  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262838  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262844  890932 command_runner.go:130] >     },
	I1208 00:30:46.262848  890932 command_runner.go:130] >     {
	I1208 00:30:46.262857  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.262867  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262876  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.262882  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262886  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262893  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.262907  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262915  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.262919  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262922  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262929  890932 command_runner.go:130] >       },
	I1208 00:30:46.262933  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262943  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262947  890932 command_runner.go:130] >     },
	I1208 00:30:46.262950  890932 command_runner.go:130] >     {
	I1208 00:30:46.262957  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.262963  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262968  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.262971  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262975  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262982  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.262985  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262990  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.262996  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.263000  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.263013  890932 command_runner.go:130] >       },
	I1208 00:30:46.263017  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.263021  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.263024  890932 command_runner.go:130] >     }
	I1208 00:30:46.263027  890932 command_runner.go:130] >   ]
	I1208 00:30:46.263031  890932 command_runner.go:130] > }
	I1208 00:30:46.265493  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.265517  890932 cache_images.go:86] Images are preloaded, skipping loading
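
(Both crictl invocations above return the same nine images, which is why preload extraction and image loading are both skipped. A hedged sketch of that check — parse the JSON shown above and verify a required tag list; the field names match the output, and the expected list here is illustrative, taken from it:)

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        for _, want := range []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/coredns/coredns:v1.13.1",
            "registry.k8s.io/pause:3.10.1",
        } {
            fmt.Println(want, "preloaded:", have[want])
        }
    }
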
	I1208 00:30:46.265524  890932 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:30:46.265625  890932 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:30:46.265699  890932 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:30:46.291229  890932 command_runner.go:130] > {
	I1208 00:30:46.291250  890932 command_runner.go:130] >   "cniconfig": {
	I1208 00:30:46.291256  890932 command_runner.go:130] >     "Networks": [
	I1208 00:30:46.291260  890932 command_runner.go:130] >       {
	I1208 00:30:46.291266  890932 command_runner.go:130] >         "Config": {
	I1208 00:30:46.291271  890932 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1208 00:30:46.291283  890932 command_runner.go:130] >           "Name": "cni-loopback",
	I1208 00:30:46.291288  890932 command_runner.go:130] >           "Plugins": [
	I1208 00:30:46.291292  890932 command_runner.go:130] >             {
	I1208 00:30:46.291297  890932 command_runner.go:130] >               "Network": {
	I1208 00:30:46.291301  890932 command_runner.go:130] >                 "ipam": {},
	I1208 00:30:46.291307  890932 command_runner.go:130] >                 "type": "loopback"
	I1208 00:30:46.291311  890932 command_runner.go:130] >               },
	I1208 00:30:46.291322  890932 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1208 00:30:46.291326  890932 command_runner.go:130] >             }
	I1208 00:30:46.291334  890932 command_runner.go:130] >           ],
	I1208 00:30:46.291344  890932 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1208 00:30:46.291348  890932 command_runner.go:130] >         },
	I1208 00:30:46.291356  890932 command_runner.go:130] >         "IFName": "lo"
	I1208 00:30:46.291362  890932 command_runner.go:130] >       }
	I1208 00:30:46.291366  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291371  890932 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1208 00:30:46.291375  890932 command_runner.go:130] >     "PluginDirs": [
	I1208 00:30:46.291379  890932 command_runner.go:130] >       "/opt/cni/bin"
	I1208 00:30:46.291390  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291395  890932 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1208 00:30:46.291398  890932 command_runner.go:130] >     "Prefix": "eth"
	I1208 00:30:46.291402  890932 command_runner.go:130] >   },
	I1208 00:30:46.291411  890932 command_runner.go:130] >   "config": {
	I1208 00:30:46.291415  890932 command_runner.go:130] >     "cdiSpecDirs": [
	I1208 00:30:46.291419  890932 command_runner.go:130] >       "/etc/cdi",
	I1208 00:30:46.291427  890932 command_runner.go:130] >       "/var/run/cdi"
	I1208 00:30:46.291432  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291436  890932 command_runner.go:130] >     "cni": {
	I1208 00:30:46.291448  890932 command_runner.go:130] >       "binDir": "",
	I1208 00:30:46.291453  890932 command_runner.go:130] >       "binDirs": [
	I1208 00:30:46.291457  890932 command_runner.go:130] >         "/opt/cni/bin"
	I1208 00:30:46.291460  890932 command_runner.go:130] >       ],
	I1208 00:30:46.291464  890932 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1208 00:30:46.291468  890932 command_runner.go:130] >       "confTemplate": "",
	I1208 00:30:46.291472  890932 command_runner.go:130] >       "ipPref": "",
	I1208 00:30:46.291475  890932 command_runner.go:130] >       "maxConfNum": 1,
	I1208 00:30:46.291479  890932 command_runner.go:130] >       "setupSerially": false,
	I1208 00:30:46.291483  890932 command_runner.go:130] >       "useInternalLoopback": false
	I1208 00:30:46.291487  890932 command_runner.go:130] >     },
	I1208 00:30:46.291492  890932 command_runner.go:130] >     "containerd": {
	I1208 00:30:46.291499  890932 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1208 00:30:46.291504  890932 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1208 00:30:46.291509  890932 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1208 00:30:46.291515  890932 command_runner.go:130] >       "runtimes": {
	I1208 00:30:46.291519  890932 command_runner.go:130] >         "runc": {
	I1208 00:30:46.291527  890932 command_runner.go:130] >           "ContainerAnnotations": null,
	I1208 00:30:46.291533  890932 command_runner.go:130] >           "PodAnnotations": null,
	I1208 00:30:46.291545  890932 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1208 00:30:46.291550  890932 command_runner.go:130] >           "cgroupWritable": false,
	I1208 00:30:46.291554  890932 command_runner.go:130] >           "cniConfDir": "",
	I1208 00:30:46.291558  890932 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1208 00:30:46.291564  890932 command_runner.go:130] >           "io_type": "",
	I1208 00:30:46.291568  890932 command_runner.go:130] >           "options": {
	I1208 00:30:46.291576  890932 command_runner.go:130] >             "BinaryName": "",
	I1208 00:30:46.291580  890932 command_runner.go:130] >             "CriuImagePath": "",
	I1208 00:30:46.291588  890932 command_runner.go:130] >             "CriuWorkPath": "",
	I1208 00:30:46.291593  890932 command_runner.go:130] >             "IoGid": 0,
	I1208 00:30:46.291599  890932 command_runner.go:130] >             "IoUid": 0,
	I1208 00:30:46.291604  890932 command_runner.go:130] >             "NoNewKeyring": false,
	I1208 00:30:46.291615  890932 command_runner.go:130] >             "Root": "",
	I1208 00:30:46.291619  890932 command_runner.go:130] >             "ShimCgroup": "",
	I1208 00:30:46.291624  890932 command_runner.go:130] >             "SystemdCgroup": false
	I1208 00:30:46.291627  890932 command_runner.go:130] >           },
	I1208 00:30:46.291641  890932 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1208 00:30:46.291648  890932 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1208 00:30:46.291655  890932 command_runner.go:130] >           "runtimePath": "",
	I1208 00:30:46.291660  890932 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1208 00:30:46.291664  890932 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1208 00:30:46.291668  890932 command_runner.go:130] >           "snapshotter": ""
	I1208 00:30:46.291672  890932 command_runner.go:130] >         }
	I1208 00:30:46.291675  890932 command_runner.go:130] >       }
	I1208 00:30:46.291678  890932 command_runner.go:130] >     },
	I1208 00:30:46.291689  890932 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1208 00:30:46.291698  890932 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1208 00:30:46.291705  890932 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1208 00:30:46.291709  890932 command_runner.go:130] >     "disableApparmor": false,
	I1208 00:30:46.291714  890932 command_runner.go:130] >     "disableHugetlbController": true,
	I1208 00:30:46.291721  890932 command_runner.go:130] >     "disableProcMount": false,
	I1208 00:30:46.291726  890932 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1208 00:30:46.291730  890932 command_runner.go:130] >     "enableCDI": true,
	I1208 00:30:46.291740  890932 command_runner.go:130] >     "enableSelinux": false,
	I1208 00:30:46.291745  890932 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1208 00:30:46.291749  890932 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1208 00:30:46.291753  890932 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1208 00:30:46.291758  890932 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1208 00:30:46.291763  890932 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1208 00:30:46.291770  890932 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1208 00:30:46.291775  890932 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1208 00:30:46.291789  890932 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291798  890932 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1208 00:30:46.291803  890932 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291810  890932 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1208 00:30:46.291819  890932 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1208 00:30:46.291823  890932 command_runner.go:130] >   },
	I1208 00:30:46.291827  890932 command_runner.go:130] >   "features": {
	I1208 00:30:46.291831  890932 command_runner.go:130] >     "supplemental_groups_policy": true
	I1208 00:30:46.291835  890932 command_runner.go:130] >   },
	I1208 00:30:46.291839  890932 command_runner.go:130] >   "golang": "go1.24.9",
	I1208 00:30:46.291850  890932 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291862  890932 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291866  890932 command_runner.go:130] >   "runtimeHandlers": [
	I1208 00:30:46.291870  890932 command_runner.go:130] >     {
	I1208 00:30:46.291874  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291886  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291890  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291893  890932 command_runner.go:130] >       }
	I1208 00:30:46.291897  890932 command_runner.go:130] >     },
	I1208 00:30:46.291907  890932 command_runner.go:130] >     {
	I1208 00:30:46.291911  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291916  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291919  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291922  890932 command_runner.go:130] >       },
	I1208 00:30:46.291926  890932 command_runner.go:130] >       "name": "runc"
	I1208 00:30:46.291930  890932 command_runner.go:130] >     }
	I1208 00:30:46.291939  890932 command_runner.go:130] >   ],
	I1208 00:30:46.291952  890932 command_runner.go:130] >   "status": {
	I1208 00:30:46.291955  890932 command_runner.go:130] >     "conditions": [
	I1208 00:30:46.291959  890932 command_runner.go:130] >       {
	I1208 00:30:46.291962  890932 command_runner.go:130] >         "message": "",
	I1208 00:30:46.291966  890932 command_runner.go:130] >         "reason": "",
	I1208 00:30:46.291973  890932 command_runner.go:130] >         "status": true,
	I1208 00:30:46.291983  890932 command_runner.go:130] >         "type": "RuntimeReady"
	I1208 00:30:46.291990  890932 command_runner.go:130] >       },
	I1208 00:30:46.291993  890932 command_runner.go:130] >       {
	I1208 00:30:46.292000  890932 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1208 00:30:46.292004  890932 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1208 00:30:46.292009  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292013  890932 command_runner.go:130] >         "type": "NetworkReady"
	I1208 00:30:46.292019  890932 command_runner.go:130] >       },
	I1208 00:30:46.292022  890932 command_runner.go:130] >       {
	I1208 00:30:46.292047  890932 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1208 00:30:46.292057  890932 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1208 00:30:46.292063  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292068  890932 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1208 00:30:46.292074  890932 command_runner.go:130] >       }
	I1208 00:30:46.292077  890932 command_runner.go:130] >     ]
	I1208 00:30:46.292080  890932 command_runner.go:130] >   }
	I1208 00:30:46.292083  890932 command_runner.go:130] > }
	I1208 00:30:46.295037  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:46.295064  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:46.295108  890932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
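
(cni.go picks kindnet here purely from the driver/runtime pair. Roughly, as a sketch of the decision the log records, not minikube's actual code:)

    package main

    import "fmt"

    // chooseCNI sketches the cni.go:143 decision: the docker driver paired with
    // the containerd runtime gets kindnet. Illustrative only.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "containerd" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "containerd")) // kindnet, as logged above
    }
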
	I1208 00:30:46.295135  890932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:30:46.295307  890932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
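
(The generated config above is four YAML documents in one file — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — written below to /var/tmp/minikube/kubeadm.yaml.new and handed to kubeadm later in the run. A quick hedged sketch for sanity-checking each document's kind before that handoff; it uses gopkg.in/yaml.v3 and is not part of minikube:)

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all four documents are read
            }
            fmt.Println(doc.APIVersion, doc.Kind)
        }
    }
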
	
	I1208 00:30:46.295389  890932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:30:46.302776  890932 command_runner.go:130] > kubeadm
	I1208 00:30:46.302853  890932 command_runner.go:130] > kubectl
	I1208 00:30:46.302863  890932 command_runner.go:130] > kubelet
	I1208 00:30:46.303600  890932 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:30:46.303710  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:30:46.311760  890932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:30:46.325760  890932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:30:46.340134  890932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 00:30:46.359100  890932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:30:46.362934  890932 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:30:46.363653  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:46.491856  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:47.343005  890932 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:30:47.343028  890932 certs.go:195] generating shared ca certs ...
	I1208 00:30:47.343054  890932 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:47.343240  890932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:30:47.343312  890932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:30:47.343326  890932 certs.go:257] generating profile certs ...
	I1208 00:30:47.343460  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:30:47.343536  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:30:47.343590  890932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:30:47.343612  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:30:47.343630  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:30:47.343655  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:30:47.343671  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:30:47.343691  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:30:47.343706  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:30:47.343719  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:30:47.343734  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:30:47.343800  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:30:47.343845  890932 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:30:47.343860  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:30:47.343888  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:30:47.343924  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:30:47.343960  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:30:47.344029  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:47.344078  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.344096  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem -> /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.344112  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.344800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:30:47.365934  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:30:47.392004  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:30:47.412283  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:30:47.434592  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:30:47.452176  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:30:47.471245  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:30:47.489925  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:30:47.511686  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:30:47.530800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:30:47.549900  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:30:47.568360  890932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:30:47.581856  890932 ssh_runner.go:195] Run: openssl version
	I1208 00:30:47.588310  890932 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:30:47.588394  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.596457  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:30:47.604012  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607834  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607889  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607941  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.648743  890932 command_runner.go:130] > 3ec20f2e
	I1208 00:30:47.649210  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:30:47.656730  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.664307  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:30:47.671943  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.675995  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676036  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676087  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.716996  890932 command_runner.go:130] > b5213941
	I1208 00:30:47.717090  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:30:47.724719  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.732215  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:30:47.740036  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744030  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744106  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744186  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.784659  890932 command_runner.go:130] > 51391683
	I1208 00:30:47.785207  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
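The three probe/link/hash cycles above (for 8467112.pem, minikubeCA.pem, and 846711.pem) are how each CA lands in the node's system trust store: verify the PEM is non-empty, symlink it into /etc/ssl/certs under its own name, compute the OpenSSL subject hash (3ec20f2e, b5213941, 51391683 in the output), and check the <hash>.0 symlink that TLS libraries use for lookup. A minimal Go sketch of one cycle, shelling out to the same commands the log runs; the function name and error handling are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert mirrors one cycle from the log: link the PEM into
    // /etc/ssl/certs, compute its OpenSSL subject hash, and point the
    // <hash>.0 symlink at it so TLS stacks can find the CA by subject.
    func installCACert(pem string) error {
        name := filepath.Base(pem)
        if err := exec.Command("sudo", "ln", "-fs", pem, "/etc/ssl/certs/"+name).Run(); err != nil {
            return fmt.Errorf("link %s: %w", name, err)
        }
        // Equivalent of: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %w", name, err)
        }
        hash := strings.TrimSpace(string(out))
        return exec.Command("sudo", "ln", "-fs", pem, "/etc/ssl/certs/"+hash+".0").Run()
    }

Note one difference: the log only verifies the <hash>.0 link with `test -L`; the sketch creates it outright.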
	I1208 00:30:47.792679  890932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796767  890932 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796815  890932 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:30:47.796824  890932 command_runner.go:130] > Device: 259,1	Inode: 3390890     Links: 1
	I1208 00:30:47.796831  890932 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:47.796838  890932 command_runner.go:130] > Access: 2025-12-08 00:26:39.668848968 +0000
	I1208 00:30:47.796844  890932 command_runner.go:130] > Modify: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796849  890932 command_runner.go:130] > Change: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796854  890932 command_runner.go:130] >  Birth: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796956  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:30:47.837955  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.838424  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:30:47.879403  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.879847  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:30:47.921180  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.921679  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:30:47.962513  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.963017  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:30:48.007633  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:48.007748  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:30:48.052514  890932 command_runner.go:130] > Certificate will not expire
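Each of the six `-checkend 86400` invocations above asks OpenSSL whether a control-plane certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" means the existing certs can be reused. The same check in plain Go, using only the standard library (a sketch, not minikube's cert-checking code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, the Go equivalent of:
    //   openssl x509 -noout -in <path> -checkend <seconds>
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

For the checks in the log, d would be 24*time.Hour, matching -checkend 86400.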
	I1208 00:30:48.052941  890932 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:48.053033  890932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:30:48.053097  890932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:30:48.081438  890932 cri.go:89] found id: ""
	I1208 00:30:48.081565  890932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:30:48.089271  890932 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:30:48.089305  890932 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:30:48.089313  890932 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:30:48.093391  890932 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:30:48.093432  890932 kubeadm.go:598] restartPrimaryControlPlane start ...
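The `ls` probe a few lines up drives this branch: kubeadm-flags.env, config.yaml, and the etcd data directory all survive from the earlier start, so minikube restarts the existing control plane instead of re-running `kubeadm init`. Reduced to a sketch (the helper signature is invented for illustration):

    package main

    // needsRestart condenses the decision visible in the log: if kubelet
    // and etcd state from a previous run is still on disk, restart the
    // existing cluster rather than initializing a fresh one.
    func needsRestart(run func(cmd string) error) bool {
        err := run("sudo ls /var/lib/kubelet/kubeadm-flags.env " +
            "/var/lib/kubelet/config.yaml /var/lib/minikube/etcd")
        return err == nil // all paths present => attempt cluster restart
    }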
	I1208 00:30:48.093495  890932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:30:48.102864  890932 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:30:48.103337  890932 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.103450  890932 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "functional-386544" cluster setting kubeconfig missing "functional-386544" context setting]
	I1208 00:30:48.103819  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
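The repair announced above adds the missing "functional-386544" cluster and context entries and rewrites the kubeconfig under a file lock. A hedged sketch of the same repair using client-go's clientcmd package (the function name is invented; minikube's own kubeconfig code differs in detail):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds the missing cluster and context entries for a
    // profile and saves the file, the shape of the repair in the log.
    func repairKubeconfig(path, profile, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cluster := clientcmdapi.NewCluster()
        cluster.Server = server // e.g. https://192.168.49.2:8441
        ctx := clientcmdapi.NewContext()
        ctx.Cluster = profile
        ctx.AuthInfo = profile
        cfg.Clusters[profile] = cluster
        cfg.Contexts[profile] = ctx
        cfg.CurrentContext = profile
        return clientcmd.WriteToFile(*cfg, path)
    }

A real repair would also take the WriteFile lock the log shows, so concurrent minikube processes don't clobber each other's edits.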
	I1208 00:30:48.104260  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.104413  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.105009  890932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:30:48.105030  890932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:30:48.105036  890932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:30:48.105041  890932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:30:48.105047  890932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:30:48.105105  890932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:30:48.105315  890932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:30:48.117774  890932 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:30:48.117857  890932 kubeadm.go:602] duration metric: took 24.417752ms to restartPrimaryControlPlane
	I1208 00:30:48.117881  890932 kubeadm.go:403] duration metric: took 64.945899ms to StartCluster
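restartPrimaryControlPlane turns on the `diff -u` probe above: a zero exit from diff means the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, so the running cluster "does not require reconfiguration". The Go shape of that comparison, as a sketch rather than minikube's code:

    package main

    import (
        "bytes"
        "os"
    )

    // sameKubeadmConfig reduces the probe in the log: the control plane is
    // reconfigured only when the newly rendered kubeadm.yaml.new differs
    // from the kubeadm.yaml already on the node.
    func sameKubeadmConfig() (bool, error) {
        cur, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            return false, err
        }
        next, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            return false, err
        }
        return bytes.Equal(cur, next), nil
    }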
	I1208 00:30:48.117925  890932 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.118025  890932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.118797  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.119107  890932 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:30:48.119487  890932 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:30:48.119575  890932 addons.go:70] Setting storage-provisioner=true in profile "functional-386544"
	I1208 00:30:48.119600  890932 addons.go:239] Setting addon storage-provisioner=true in "functional-386544"
	I1208 00:30:48.119601  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:48.119630  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.120591  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.119636  890932 addons.go:70] Setting default-storageclass=true in profile "functional-386544"
	I1208 00:30:48.120910  890932 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386544"
	I1208 00:30:48.121235  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.122185  890932 out.go:179] * Verifying Kubernetes components...
	I1208 00:30:48.124860  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:48.159125  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.159302  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.159592  890932 addons.go:239] Setting addon default-storageclass=true in "functional-386544"
	I1208 00:30:48.159620  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.160038  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.170516  890932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:30:48.173762  890932 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.173784  890932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:30:48.173857  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.210938  890932 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:48.210964  890932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:30:48.211031  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.228251  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.254642  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
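The two `docker container inspect -f` calls above, and the ssh clients dialing 127.0.0.1:33558 right after, show how the docker driver reaches the node: the container's 22/tcp is published on a random host port, which the Go template extracts. A sketch of the same lookup, with the template copied verbatim from the log (the function name is illustrative):

    package main

    import (
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which host port is mapped to the
    // container's 22/tcp, using the exact template from the log.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }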
	I1208 00:30:48.338576  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:48.365732  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.388846  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.094190  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094240  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094289  890932 retry.go:31] will retry after 221.572731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094347  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094353  890932 retry.go:31] will retry after 127.29639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
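From here until the apiserver comes back, both addon applies fail the same way: kubectl's client-side validation tries to download the OpenAPI schema from localhost:8441 and gets connection refused because kube-apiserver is still starting, so minikube re-queues each apply with a growing, jittered delay (127ms, 191ms, 221ms, and so on, up to 6.3s below). The retry shape, as a generic sketch rather than minikube's retry.go:

    package main

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds, sleeping a jittered,
    // doubling delay between attempts, the pattern visible in the
    // "will retry after ..." lines of this log.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Up to 50% jitter keeps parallel retries from synchronizing.
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
            delay *= 2
        }
        return err
    }

The jitter matters here because two applies (storage-provisioner and storageclass) are retrying in parallel against the same endpoint.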
	I1208 00:30:49.094558  890932 node_ready.go:35] waiting up to 6m0s for node "functional-386544" to be "Ready" ...
	I1208 00:30:49.094733  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.094831  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.095237  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
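In parallel with the addon retries, node_ready starts polling GET /api/v1/nodes/functional-386544 every 500ms; the empty status and milliseconds=0 in each "Response" line mean the TCP connection itself was refused, i.e. the apiserver is not listening yet. A sketch of such a wait loop with client-go (assuming wait.PollUntilContextTimeout from k8s.io/apimachinery; not minikube's exact code):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node every 500ms (the cadence in the
    // timestamps above) until its Ready condition is True or the
    // timeout expires.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // connection refused while the apiserver restarts: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

With the parameters in the log, this would be called as waitNodeReady(cs, "functional-386544", 6*time.Minute).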
	I1208 00:30:49.222592  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.293397  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.293520  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.293548  890932 retry.go:31] will retry after 191.192714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.316617  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.385398  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.389149  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.389192  890932 retry.go:31] will retry after 221.019406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.485459  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.544915  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.548575  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.548650  890932 retry.go:31] will retry after 430.912171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.594843  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.594928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.595415  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.610614  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.669839  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.669884  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.669904  890932 retry.go:31] will retry after 602.088887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.980400  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:50.054076  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.057921  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.057957  890932 retry.go:31] will retry after 1.251170732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.095196  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.095305  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:50.273088  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:50.333799  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.333898  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.333941  890932 retry.go:31] will retry after 841.525831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.595581  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.595651  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.595949  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:51.095803  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.096238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:51.096319  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:51.176619  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:51.234663  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.238362  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.238405  890932 retry.go:31] will retry after 1.674228806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.309626  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:51.370041  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.373759  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.373793  890932 retry.go:31] will retry after 1.825797421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.595251  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.595336  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.595859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.095576  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.095656  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.096001  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.594894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.595585  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.912970  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:52.971340  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:52.975027  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:52.975063  890932 retry.go:31] will retry after 2.158822419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.095343  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.095426  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.095834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:53.200381  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:53.262558  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:53.262597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.262618  890932 retry.go:31] will retry after 2.117348765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.595941  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.596038  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.596315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:53.596377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:54.094883  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.095321  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:54.595354  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.595475  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.097427  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.097684  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.097999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.134417  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:55.207147  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.207186  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.207211  890932 retry.go:31] will retry after 1.888454669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.380583  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:55.442228  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.442305  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.442354  890932 retry.go:31] will retry after 2.144073799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.595860  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.595937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.596276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:56.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.095041  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:56.095552  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:56.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.595189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.094913  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.094995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.096590  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:57.159346  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.159395  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.159419  890932 retry.go:31] will retry after 2.451052222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.586888  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:57.595329  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.595647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.595917  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.644195  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.648428  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.648466  890932 retry.go:31] will retry after 6.27239315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
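	[editor's note] The failure itself is client-side: kubectl apply validates the manifest against the cluster's /openapi/v2 schema before sending anything, so while kube-apiserver is down on port 8441 even a valid YAML is rejected with the "failed to download openapi" error; as the message itself notes, --validate=false would skip that step. The sketch below approximates how a caller might shell out to kubectl the way ssh_runner does, with the command form and paths taken from the log; treat it as an approximation, since minikube actually runs this over SSH inside the node.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest (illustrative) mirrors the logged command: sudo with a
	// KUBECONFIG assignment, then kubectl apply --force on a static manifest.
	// Stdout and stderr are returned together so retry logic can log them,
	// as this report does after each failed attempt.
	func applyManifest(kubectl, manifest string) error {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			kubectl, "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		}
		return nil
	}

	func main() {
		err := applyManifest(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"/etc/kubernetes/addons/storageclass.yaml",
		)
		fmt.Println(err)
	}

	[end note]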
	I1208 00:30:58.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.095132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:58.595202  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.595277  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.595673  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:58.595737  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:59.095382  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.095474  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.095817  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.595497  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.595641  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.595962  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.611138  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:59.678142  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:59.678192  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:59.678217  890932 retry.go:31] will retry after 3.668002843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:00.095797  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.096216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:00.594886  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.594963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.595392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:01.095660  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.095757  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.096070  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:01.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:01.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.594889  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.595445  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.094968  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.595407  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
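	[editor's note] The paired "Request"/"Response" lines throughout this log come from a debug round-tripper that client-go wraps around the HTTP transport: it records the verb, URL, and headers going out, and the status and latency coming back (milliseconds=0 here because the dial fails immediately, before any HTTP exchange). A minimal stand-alone version of that wrapper is sketched below; logRoundTripper is a hypothetical name, not the upstream type.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// logRoundTripper (illustrative) wraps another RoundTripper and prints
	// one "Request" line before the call and one "Response" line after,
	// roughly in the shape seen in this log.
	type logRoundTripper struct {
		next http.RoundTripper
	}

	func (l logRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("\"Request\" verb=%q url=%q\n", req.Method, req.URL)
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		status := ""
		if resp != nil {
			status = resp.Status
		}
		fmt.Printf("\"Response\" status=%q milliseconds=%d\n",
			status, time.Since(start).Milliseconds())
		return resp, err
	}

	func main() {
		client := &http.Client{Transport: logRoundTripper{next: http.DefaultTransport}}
		// A refused connection logs status="" and ~0 ms, like the lines above.
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-386544")
	}

	[end note]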
	I1208 00:31:03.346685  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:03.431951  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.432026  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.432051  890932 retry.go:31] will retry after 7.871453146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.595808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.595982  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.596320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:03.596392  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:03.921995  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:03.979614  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.984229  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.984264  890932 retry.go:31] will retry after 6.338984785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:04.095500  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.095579  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.095881  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:04.595749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.596230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.094969  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.594775  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.595280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:06.094874  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:06.095343  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:06.594851  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.594931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.596121  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:07.095769  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.095852  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.096129  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:07.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.595312  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.594744  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.594830  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.595101  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:08.595154  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:09.094875  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.094970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.095284  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:09.594884  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.594974  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.095326  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.095417  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.095739  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.324305  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:10.384998  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:10.385051  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.385071  890932 retry.go:31] will retry after 7.782157506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.595548  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.595897  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:10.595950  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:11.095753  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.095835  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:11.304608  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:11.367180  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:11.367234  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.367256  890932 retry.go:31] will retry after 13.123466664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.595455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.595807  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.095614  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.095694  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.095989  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.594814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:13.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.094906  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:13.095366  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:13.595620  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.595700  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.094918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.095230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.594881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.595183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.094943  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.095289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.594876  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.595270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:15.595318  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:16.094820  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.094894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.095164  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:16.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.594908  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.595244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.095054  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.095138  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.095471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.595816  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.595940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.596241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:17.596293  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:18.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.094955  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:18.168028  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:18.232113  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:18.232150  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.232169  890932 retry.go:31] will retry after 8.094581729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.595690  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.595775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.596183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.095628  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.095697  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.096011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.594718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.594802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.595181  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:20.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.095232  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:20.095311  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:20.595598  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.595793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.596357  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.095040  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.594912  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.595011  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.595362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.094750  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.094826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.095087  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.594811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:22.595315  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:23.094999  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.095088  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.095463  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:23.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.595136  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.094856  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.490869  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:24.557459  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:24.557507  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.557527  890932 retry.go:31] will retry after 14.933128441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.595841  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.595922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.596313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:24.596367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:25.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.094843  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.095113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:25.594817  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.594915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.595217  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.094904  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.095360  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.327725  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:26.388171  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:26.388210  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.388230  890932 retry.go:31] will retry after 17.607962094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.595498  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.595632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.595892  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:27.095752  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.095851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.096189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:27.096258  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:27.594738  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.095672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.096073  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.595257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.595836  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.595984  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.596331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:29.596385  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:30.095156  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.095252  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.095627  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:30.595556  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.596442  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.094732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.094808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.095102  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.594886  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.595210  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:32.094828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.095216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:32.095266  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:32.595760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.595841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.596354  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.094945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.095264  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.594878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:34.094814  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.095244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:34.095287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:34.594940  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.595021  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.595365  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.094942  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.095029  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.595132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:36.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.094904  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.095255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:36.095316  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:36.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.595276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.095623  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.095696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.095973  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.594749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.594850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.595227  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:38.094987  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.095112  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:38.095555  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:38.595474  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.595556  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.595831  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.095726  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.095806  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.096148  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.491741  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:39.568327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:39.568372  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.568394  890932 retry.go:31] will retry after 16.95217324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
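
The irregular interval here ("will retry after 16.95217324s") is characteristic of randomized backoff. Below is a minimal sketch of that pattern, assuming jittered exponential backoff; applyAddon is a hypothetical stand-in for the kubectl apply invocation, not minikube's actual retry.go.

    // retrysketch.go - jittered exponential backoff, as suggested by the
    // fractional retry durations in the log.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // applyAddon stands in for `kubectl apply --force -f <manifest>`; here it
    // always fails, like the connection-refused runs above.
    func applyAddon(manifest string) error {
        return errors.New("dial tcp [::1]:8441: connect: connection refused")
    }

    func main() {
        backoff := 2 * time.Second
        for attempt := 1; attempt <= 5; attempt++ {
            if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err == nil {
                fmt.Println("applied")
                return
            }
            // Jitter spreads retries apart, producing durations like the
            // 16.95217324s seen above.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %s\n", wait)
            time.Sleep(wait)
            backoff *= 2
        }
    }
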
	I1208 00:31:39.595718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.596632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.597031  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:40.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.095785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:40.096128  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:40.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.595175  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.094806  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.094893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.095209  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.595791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.596479  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.094922  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.095018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.095545  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.595373  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.595463  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.596518  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:31:42.596581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:43.095290  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.095363  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.095661  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.595732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.595812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.596157  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.996743  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:44.061795  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:44.065597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.065636  890932 retry.go:31] will retry after 36.030777087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
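
Both addon manifests fail identically: kubectl cannot download the openapi schema because nothing answers on port 8441. One way to make that precondition explicit is to probe the apiserver before attempting an apply; the sketch below assumes the standard /readyz endpoint and uses InsecureSkipVerify purely for illustration.

    // probe.go - check apiserver reachability before applying manifests.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Illustration only: skip cert verification for a local probe.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/readyz")
        if err != nil {
            // The same failure as the validation errors above:
            // dial tcp [::1]:8441: connect: connection refused.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver readyz status:", resp.Status)
    }
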
	I1208 00:31:44.094709  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.095134  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:44.595619  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.595689  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.596188  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:45.095192  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.095284  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:45.095814  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:45.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.595664  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.596700  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:46.095471  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.095564  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.095854  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:46.595665  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.595741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.596605  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:47.095443  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.095832  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:47.095881  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:47.595397  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.595480  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.595753  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.095688  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.095797  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.096203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.594869  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:49.095593  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.095675  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.096008  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:49.096067  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:49.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.594865  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.595221  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.094833  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.596966  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:51.095757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.095841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:51.096238  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:51.594921  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.595014  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.595361  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.094793  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.095231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.594828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.094823  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.594757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.594827  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.595090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:53.595131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:54.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.095337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:54.595028  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.595111  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.595443  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.095154  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.095659  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.595576  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.595659  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.595995  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:55.596040  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:56.094916  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:56.520835  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:56.580569  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580606  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580706  890932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
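
kubectl's own error text suggests --validate=false, which skips the openapi download. Note that this would only silence the validation step; the apply itself still needs a reachable apiserver. A sketch of that manual re-run, with the binary and manifest paths taken from the log:

    // applyonce.go - re-run the failed apply without schema validation.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
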
	I1208 00:31:56.595717  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.595785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.596127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.094846  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.094922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.594959  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.595375  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:58.095719  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.095802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.096233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:58.096313  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:58.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.095006  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.095098  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.095434  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.594763  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.594848  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.595114  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.095594  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.595570  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.596011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:00.596082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:01.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.595258  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.096010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.595794  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.596311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:02.596371  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:03.095057  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.095145  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.095500  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:03.595367  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.595442  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.095519  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.095642  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.096000  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.595726  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.595814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.596263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:05.095616  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.095688  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.095960  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:05.096006  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:05.595742  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.595817  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.595654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.596003  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:07.095781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.095861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.096199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:07.096254  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:07.594824  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.594910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.094781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.095147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.595140  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.595213  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.595560  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.095144  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.095234  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.095578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.595126  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.595198  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.595458  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:09.595499  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.095157  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.095251  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.095657  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.595220  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.595297  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.595648  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.095385  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.095455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.095752  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.595492  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.595574  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.595922  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:11.595978  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.095776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.095855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.096220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.594787  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.595182  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.094907  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.094987  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.095332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.595577  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:13.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:14.095571  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.095649  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.095941  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.595772  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.595853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.596231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.094898  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.095334  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.595180  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:16.095326  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:16.595008  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.595092  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.595453  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.094788  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.095049  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.594824  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.595151  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.094845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.094926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.595685  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.595803  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.596147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:18.596225  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:19.094780  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.095319  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.594891  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.594970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.595320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.094805  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.094881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.095201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.097611  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:20.173666  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173721  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173816  890932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:32:20.177110  890932 out.go:179] * Enabled addons: 
	I1208 00:32:20.180584  890932 addons.go:530] duration metric: took 1m32.061097112s for enable addons: enabled=[]
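
enabled=[] confirms that no addon callback succeeded during the 1m32s window. For reference, a duration metric of this shape can be produced by timing the whole enable phase; the sketch below is illustrative, not minikube's addons.go.

    // durations.go - report a duration metric like the log line above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{} // every apply failed, so nothing was enabled
        time.Sleep(10 * time.Millisecond) // retries would happen here
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }
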
	I1208 00:32:20.595272  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.595353  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.595670  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.095445  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.095926  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:21.595648  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.596006  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.094730  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.094810  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.095155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.594845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.594924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.095654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.095734  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.096034  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.096082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.594882  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.595243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 poll repeats every ~500 ms from 00:32:24 through 00:33:24 with the same request headers and an empty response each time; node_ready.go:55 re-logs the same "connection refused" warning roughly every 2 s (00:32:25, 00:32:27, ..., 00:33:23) ...]
	I1208 00:33:25.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:33:25.094940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.095305  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:25.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:33:25.595099  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:25.595430  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:26.095122  890932 type.go:168] "Request Body" body=""
	I1208 00:33:26.095199  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.095487  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:26.095533  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:26.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:33:26.594948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:26.595300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.095043  890932 type.go:168] "Request Body" body=""
	I1208 00:33:27.095128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:27.595145  890932 type.go:168] "Request Body" body=""
	I1208 00:33:27.595211  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:27.595478  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:28.095182  890932 type.go:168] "Request Body" body=""
	I1208 00:33:28.095261  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.095626  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:28.095683  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:28.595637  890932 type.go:168] "Request Body" body=""
	I1208 00:33:28.595718  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:28.596082  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.095644  890932 type.go:168] "Request Body" body=""
	I1208 00:33:29.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.096085  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:29.594793  890932 type.go:168] "Request Body" body=""
	I1208 00:33:29.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:29.595201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:33:30.094986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.095390  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:30.594782  890932 type.go:168] "Request Body" body=""
	I1208 00:33:30.594853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:30.595110  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:30.595152  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:31.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:33:31.094935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:31.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:31.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:31.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:33:32.094856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.095156  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:32.594832  890932 type.go:168] "Request Body" body=""
	I1208 00:33:32.594907  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:32.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:32.595282  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:33.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:33:33.094953  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:33.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:33.594873  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:33.595155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.094850  890932 type.go:168] "Request Body" body=""
	I1208 00:33:34.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:34.594897  890932 type.go:168] "Request Body" body=""
	I1208 00:33:34.594986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:34.595405  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:34.595460  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:35.094996  890932 type.go:168] "Request Body" body=""
	I1208 00:33:35.095074  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.095402  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:35.595223  890932 type.go:168] "Request Body" body=""
	I1208 00:33:35.595458  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:35.596025  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:36.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.096170  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:36.595656  890932 type.go:168] "Request Body" body=""
	I1208 00:33:36.595731  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:36.596020  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:36.596064  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:37.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:33:37.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.095205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:37.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:33:37.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:37.595281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.095648  890932 type.go:168] "Request Body" body=""
	I1208 00:33:38.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.096057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:38.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:33:38.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.095004  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.095087  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.095436  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.095492  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.595152  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.595232  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.595511  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.094868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.095291  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.594994  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.595078  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.595449  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.095641  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.095710  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.095987  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.096029  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:41.595774  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.595854  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.094945  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.095040  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.095447  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.594810  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.594880  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.095281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.595149  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.595226  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.595578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:43.595639  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.095775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.096055  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.594826  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.594909  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.595246  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.094986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.595704  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.595779  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.596135  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:45.596188  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.095300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.094796  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.094870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.095149  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.594854  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.594930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.095000  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.095085  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.095460  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.095511  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:48.595390  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.595476  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.595748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.095572  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.095647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.095999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.595785  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.596224  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.095203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.594890  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.595313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:50.595368  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.094876  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.095346  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.595652  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.095805  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.096192  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.594926  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.595378  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:52.595433  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.094786  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.594865  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.094881  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.094965  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.595582  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.595660  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.595948  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:54.595991  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:55.095773  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.095890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.096222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.595262  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.095686  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.095975  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.595757  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.595833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.596223  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:56.596285  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:57.094855  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.095265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.595601  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.595670  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.595954  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.095735  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.095811  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.096159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.594840  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.594919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.095680  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.095963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:59.096015  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:59.595776  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.595860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.596187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.094948  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.095044  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.594807  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.595187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.094949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.594909  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.595385  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:01.595446  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:02.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.095145  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.594938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.095022  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.095104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.095477  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.595437  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.595711  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:03.595753  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:04.095511  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.095589  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.095964  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.595805  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.595893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.596256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.094892  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.095280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.595117  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.595525  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.095311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:06.095367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:06.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.595230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.095222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.594889  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.594993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.595353  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.095670  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.095741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:08.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:08.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.595235  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.094901  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.094980  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.595609  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.595691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.595986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.594928  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.595018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.595327  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:10.595376  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:11.094812  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.094900  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.095243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.595288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.094946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.595130  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.094818  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.094897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:13.095308  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:13.594998  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.595102  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.595450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.095713  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.095782  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.096067  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.595722  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.595804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.094925  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.095362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:15.095419  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[the same GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 request/response pair repeats every ~500ms through 00:35:17, always with an empty response, and the node_ready.go:55 "connection refused" warning recurs roughly every 2s at 00:34:17, 00:34:19, 00:34:21, 00:34:24, 00:34:26, 00:34:28, 00:34:31, 00:34:33, 00:34:36, 00:34:38, 00:34:40, 00:34:42, 00:34:45, 00:34:47, 00:34:49, 00:34:51, 00:34:54, 00:34:56, 00:34:58, 00:35:01, 00:35:03, 00:35:06, 00:35:08, 00:35:10, and 00:35:13]
	W1208 00:35:15.096187  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:17.094794  890932 type.go:168] "Request Body" body=""
	I1208 00:35:17.094867  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:17.594835  890932 type.go:168] "Request Body" body=""
	I1208 00:35:17.594911  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:17.595233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:17.595290  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:18.094867  890932 type.go:168] "Request Body" body=""
	I1208 00:35:18.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.095301  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:18.595435  890932 type.go:168] "Request Body" body=""
	I1208 00:35:18.595508  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:18.595780  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.095641  890932 type.go:168] "Request Body" body=""
	I1208 00:35:19.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.096078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:19.594777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:19.594858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:19.595160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:20.094736  890932 type.go:168] "Request Body" body=""
	I1208 00:35:20.094818  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.095118  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:20.095178  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:20.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:20.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:20.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:35:21.095027  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:21.595667  890932 type.go:168] "Request Body" body=""
	I1208 00:35:21.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:21.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:22.094791  890932 type.go:168] "Request Body" body=""
	I1208 00:35:22.094877  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.095219  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:22.095276  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:22.594927  890932 type.go:168] "Request Body" body=""
	I1208 00:35:22.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:22.595337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.094807  890932 type.go:168] "Request Body" body=""
	I1208 00:35:23.094885  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.095213  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:23.594870  890932 type.go:168] "Request Body" body=""
	I1208 00:35:23.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:23.595296  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:24.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:35:24.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:24.095323  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:24.595667  890932 type.go:168] "Request Body" body=""
	I1208 00:35:24.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:24.596024  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.094819  890932 type.go:168] "Request Body" body=""
	I1208 00:35:25.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.095316  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:25.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:25.594932  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:25.595268  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:26.095610  890932 type.go:168] "Request Body" body=""
	I1208 00:35:26.095690  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.095967  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:26.096009  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:26.594774  890932 type.go:168] "Request Body" body=""
	I1208 00:35:26.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:26.595220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:35:27.094942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.095279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:27.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:35:27.594858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:27.595172  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:28.094940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.095286  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:28.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:35:28.594890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:28.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:28.595297  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:29.095621  890932 type.go:168] "Request Body" body=""
	I1208 00:35:29.095690  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:29.594756  890932 type.go:168] "Request Body" body=""
	I1208 00:35:29.594833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:29.595168  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.094972  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.095501  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.594791  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.594870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.094879  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.095299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:31.095357  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:31.594860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.595255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.094855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.095179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.594962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.595305  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.095036  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.095132  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.095524  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:33.095581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.594888  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.594964  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.595242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.094933  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.095392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.594946  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.595024  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.595376  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.095094  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.095178  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.095522  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.594940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.595291  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:36.094854  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.095261  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.594784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.594860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.595205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.594935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.595344  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:38.095615  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.095691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.095993  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.595236  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.094849  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.594821  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.594893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.595159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.094832  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.094914  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:40.095383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.595050  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.595133  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.095165  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.095247  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.595432  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.595533  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.595908  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.095710  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.096304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.096383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.594857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.595136  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.595212  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.595549  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.095717  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.095796  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.096072  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.595340  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.595058  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.595128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.595471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.095186  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.095266  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.595402  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.595481  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.595824  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.595879  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.595618  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.595696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.596010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.095716  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.095799  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.096202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.595337  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.595413  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.595706  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.095444  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.095524  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.095902  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.095961  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.595540  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.595625  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.595976  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.095709  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.095792  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.096095  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.594799  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.594874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.094959  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.095064  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.095433  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.594801  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.595287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.094975  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.095331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.595042  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.595124  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.595480  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.095139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.594836  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.595282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.595384  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.094872  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.095335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.595658  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.595729  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.596021  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.094747  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.094842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.095674  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.096062  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.096108  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.594963  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.595039  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.595371  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.094934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.594883  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.594996  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.595394  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.095103  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.095186  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.095515  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.595715  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.595795  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.596169  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.596227  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:59.095645  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.096039  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.594804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.595133  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.094929  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.095015  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.095342  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.595183  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.595265  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.595623  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.095438  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.095859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:01.095916  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.595630  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.595708  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.596080  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.096058  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.594816  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.594895  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.595265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.095283  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.594768  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.595207  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:03.595263  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:04.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.594813  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.594897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.095649  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.095720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.595766  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.596299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:06.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.095304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.595639  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.595720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.596054  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.094760  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.094857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.095153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.594898  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.594972  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.595325  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.095635  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.095713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.095986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:08.096028  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:08.595142  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.595227  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.595555  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.095287  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.095364  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.095690  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.595392  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.595461  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.095517  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.595700  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.595784  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:10.596216  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:11.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.094850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.595266  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.094980  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.095061  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.095386  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.595126  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.094912  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:13.095347  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:13.595000  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.595104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.595437  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.095097  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.095172  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.095450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.595177  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.595281  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.595679  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.095515  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.095616  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:15.096068  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:15.595565  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.595677  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.595994  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.094724  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.094815  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.095174  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.594934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.094859  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.095173  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.594829  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:17.595272  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:18.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.594793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.595065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.094767  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.594925  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.595263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.595322  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:20.095644  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.096099  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.594887  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.094954  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.095036  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.095363  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.595677  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.595750  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.596077  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.596147  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:22.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.094938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.095256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.594939  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.095643  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.095723  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.096019  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.595048  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.595147  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.595567  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.095396  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.095478  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:24.095979  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:24.595457  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.595529  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.595803  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.095586  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.095668  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.096057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.595744  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.595838  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.596274  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.095659  890932 type.go:168] "Request Body" body=""
	I1208 00:36:26.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.096092  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:26.096144  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:26.594794  890932 type.go:168] "Request Body" body=""
	I1208 00:36:26.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:36:27.094928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.095252  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.594796  890932 type.go:168] "Request Body" body=""
	I1208 00:36:27.594876  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.595170  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:28.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:36:28.595430  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.595768  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:28.595815  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:29.095242  890932 type.go:168] "Request Body" body=""
	I1208 00:36:29.095310  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.095629  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.595200  890932 type.go:168] "Request Body" body=""
	I1208 00:36:29.595280  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.595637  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.095238  890932 type.go:168] "Request Body" body=""
	I1208 00:36:30.095338  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.095745  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.595479  890932 type.go:168] "Request Body" body=""
	I1208 00:36:30.595561  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.595834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.595883  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:31.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:36:31.095759  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.096119  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:31.594834  890932 type.go:168] "Request Body" body=""
	I1208 00:36:31.594916  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.595238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.094743  890932 type.go:168] "Request Body" body=""
	I1208 00:36:32.094812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.095077  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:32.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:36:32.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:32.595202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:33.094957  890932 type.go:168] "Request Body" body=""
	I1208 00:36:33.095036  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.095413  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:33.095470  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:33.595484  890932 type.go:168] "Request Body" body=""
	I1208 00:36:33.595560  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:33.595823  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.095676  890932 type.go:168] "Request Body" body=""
	I1208 00:36:34.095765  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.096127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:34.594821  890932 type.go:168] "Request Body" body=""
	I1208 00:36:34.594900  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:34.595207  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.094778  890932 type.go:168] "Request Body" body=""
	I1208 00:36:35.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.095205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:35.594899  890932 type.go:168] "Request Body" body=""
	I1208 00:36:35.594983  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:35.595332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:35.595390  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:36.095114  890932 type.go:168] "Request Body" body=""
	I1208 00:36:36.095205  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.095569  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:36.595347  890932 type.go:168] "Request Body" body=""
	I1208 00:36:36.595414  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:36.595677  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.095479  890932 type.go:168] "Request Body" body=""
	I1208 00:36:37.095557  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.095923  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:37.595648  890932 type.go:168] "Request Body" body=""
	I1208 00:36:37.595731  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:37.596092  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:37.596146  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:38.095610  890932 type.go:168] "Request Body" body=""
	I1208 00:36:38.095685  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.095965  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:38.595066  890932 type.go:168] "Request Body" body=""
	I1208 00:36:38.595156  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:38.595538  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.095256  890932 type.go:168] "Request Body" body=""
	I1208 00:36:39.095338  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.095679  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:39.595429  890932 type.go:168] "Request Body" body=""
	I1208 00:36:39.595505  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:39.595772  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:40.095636  890932 type.go:168] "Request Body" body=""
	I1208 00:36:40.095721  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.096088  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:40.096154  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:40.594798  890932 type.go:168] "Request Body" body=""
	I1208 00:36:40.594895  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:40.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.094802  890932 type.go:168] "Request Body" body=""
	I1208 00:36:41.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.095218  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:41.594911  890932 type.go:168] "Request Body" body=""
	I1208 00:36:41.594990  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:41.595335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.094885  890932 type.go:168] "Request Body" body=""
	I1208 00:36:42.094978  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.095379  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:42.595088  890932 type.go:168] "Request Body" body=""
	I1208 00:36:42.595162  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:42.595436  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:42.595481  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:43.094831  890932 type.go:168] "Request Body" body=""
	I1208 00:36:43.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.095253  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:43.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:36:43.594925  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:43.595271  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.095602  890932 type.go:168] "Request Body" body=""
	I1208 00:36:44.095672  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.095992  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:44.595789  890932 type.go:168] "Request Body" body=""
	I1208 00:36:44.595878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:44.596229  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:44.596286  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:45.095008  890932 type.go:168] "Request Body" body=""
	I1208 00:36:45.095095  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.095519  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:45.594877  890932 type.go:168] "Request Body" body=""
	I1208 00:36:45.594961  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:45.595315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.094844  890932 type.go:168] "Request Body" body=""
	I1208 00:36:46.094926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.095284  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:46.594993  890932 type.go:168] "Request Body" body=""
	I1208 00:36:46.595078  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:46.595451  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:47.094715  890932 type.go:168] "Request Body" body=""
	I1208 00:36:47.094789  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.095057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:47.095099  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:47.594746  890932 type.go:168] "Request Body" body=""
	I1208 00:36:47.594824  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:47.595163  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.095765  890932 type.go:168] "Request Body" body=""
	I1208 00:36:48.095845  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.096257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:48.594789  890932 type.go:168] "Request Body" body=""
	I1208 00:36:48.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:48.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:49.094763  890932 type.go:168] "Request Body" body=""
	I1208 00:36:49.094842  890932 node_ready.go:38] duration metric: took 6m0.000209264s for node "functional-386544" to be "Ready" ...
	I1208 00:36:49.097838  890932 out.go:203] 
	W1208 00:36:49.100712  890932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:36:49.100735  890932 out.go:285] * 
	W1208 00:36:49.102896  890932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:36:49.105576  890932 out.go:203] 
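
The six minutes of GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 retries above, issued roughly every 500ms and all answered with "connect: connection refused", are minikube's node-readiness wait spinning against an apiserver that never came up; the GUEST_START exit simply reports that the 6m0s deadline expired. A minimal sketch of this style of wait, assuming client-go; the names here are illustrative, not minikube's actual node_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node until its Ready condition is True or the
	// deadline expires, retrying through transient dial errors such as the
	// "connection refused" responses seen in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(ctx, timeout)
		defer cancel()
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
			case <-tick.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // apiserver not reachable yet; keep retrying until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := waitNodeReady(context.Background(), cs, "functional-386544", 6*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}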
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038651896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038674026Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038731429Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038749251Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038760107Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038773202Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038782999Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038794954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038811882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.038846729Z" level=info msg="Connect containerd service"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.039475923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.040148999Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051184448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051253471Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051289894Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.051318284Z" level=info msg="Start recovering state"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097113159Z" level=info msg="Start event monitor"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097165779Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097179465Z" level=info msg="Start streaming server"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097189672Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097198969Z" level=info msg="runtime interface starting up..."
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097205484Z" level=info msg="starting plugins..."
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097218112Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:30:46 functional-386544 containerd[5240]: time="2025-12-08T00:30:46.097506723Z" level=info msg="containerd successfully booted in 0.085148s"
	Dec 08 00:30:46 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
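
containerd itself boots cleanly here; the only error in its startup log is the CNI config load at init, which is expected on a node where no CNI config has been written yet (the "Start cni network conf syncer" line shows containerd will pick a config up once something installs one). An illustrative check, assuming only the Go standard library, of what that syncer is waiting on:

	package main

	import (
		"fmt"
		"path/filepath"
	)

	func main() {
		// An empty result here matches containerd's "no network config found
		// in /etc/cni/net.d" message from the boot log above.
		var found []string
		for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
			m, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			if err == nil {
				found = append(found, m...)
			}
		}
		if len(found) == 0 {
			fmt.Println("no CNI config present yet; pod networking cannot start")
			return
		}
		for _, f := range found {
			fmt.Println(f)
		}
	}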
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:36:53.226403    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:53.227132    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:53.228786    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:53.229351    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:36:53.231004    8649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
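
Note that the failure mode is identical from inside the node (localhost:8441) and from the test host (192.168.49.2:8441): the dial is refused immediately, meaning nothing is listening on the apiserver port at all, as opposed to traffic being dropped by a firewall, which would time out instead. A throwaway probe that makes that distinction visible; this is a diagnostic sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				fmt.Printf("%s: %v\n", addr, err) // immediate refusal: no listener
				continue
			}
			conn.Close()
			fmt.Printf("%s: listening\n", addr)
		}
	}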
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:36:53 up  5:19,  0 user,  load average: 0.52, 0.44, 1.07
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 08 00:36:50 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:50 functional-386544 kubelet[8503]: E1208 00:36:50.902254    8503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:50 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:51 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 08 00:36:51 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:51 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:51 functional-386544 kubelet[8523]: E1208 00:36:51.666966    8523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:51 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:51 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:52 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 08 00:36:52 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:52 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:52 functional-386544 kubelet[8559]: E1208 00:36:52.402325    8559 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:52 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:52 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:53 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 815.
	Dec 08 00:36:53 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:53 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:53 functional-386544 kubelet[8635]: E1208 00:36:53.155537    8635 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:53 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:53 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
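
This kubelet section is the root cause of everything above: the restart counter has climbed past 800 because every start fails the same configuration validation, a kubelet built to refuse cgroup v1 hosts running on a node whose 5.15.0-1084-aws (Ubuntu 20.04-era) kernel, per the kernel section above, typically still presents the legacy cgroup hierarchy. With the kubelet never up, the apiserver is never started, which is why every request in this log was refused. A minimal sketch, assuming only the standard library, of how one might check a host's cgroup mode (the unified v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers; v1 does not):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a cgroup v2 (unified) host this file exists at the hierarchy root;
		// on a cgroup v1 host it does not, which is the case this kubelet rejects.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 (legacy hierarchy): rejected by this kubelet build")
		} else {
			fmt.Println("could not determine cgroup mode:", err)
		}
	}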
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (383.892851ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 kubectl -- --context functional-386544 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 kubectl -- --context functional-386544 get pods: exit status 1 (132.883574ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-386544 kubectl -- --context functional-386544 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
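Most of the inspect dump above is incidental to this failure; the relevant detail is the port map, which shows 8441/tcp published but says nothing about whether an apiserver is actually behind it. The same detail can be pulled on its own with the Go-template flag that the harness itself uses later in this log (a narrower, equivalent query):

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-386544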
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (345.423188ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
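minikube's status command bit-encodes per-component state into its exit code, so a non-zero exit with Host reported as Running is the expected shape when the container is up but the control plane is not; exit status 2 here is consistent with the apiserver refusal above, hence the harness's "may be ok". A follow-up without --format would list every component (hypothetical command, not part of the recorded run):

	out/minikube-linux-arm64 status -p functional-386544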
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 logs -n 25: (1.000128245s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-932121 image ls --format short --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format yaml --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format json --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format table --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh     │ functional-932121 ssh pgrep buildkitd                                                                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image   │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                  │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls                                                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete  │ -p functional-932121                                                                                                                                    │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start   │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start   │ -p functional-386544 --alsologtostderr -v=8                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:latest                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add minikube-local-cache-test:functional-386544                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache delete minikube-local-cache-test:functional-386544                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl images                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │                     │
	│ cache   │ functional-386544 cache reload                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:37 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ kubectl │ functional-386544 kubectl -- --context functional-386544 get pods                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
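	Reading the END TIME column above: both start rows (00:22 and 00:30 UTC) and the final kubectl row never recorded an end time, matching a start that never brought the apiserver up and a kubectl call that then failed against it.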
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:30:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
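	Decoding the first entry below against that format: in "I1208 00:30:43.106195  890932 out.go:360]", I is the severity (Info), 1208 is mmdd (Dec 08), 00:30:43.106195 is the wall-clock time, 890932 is the thread id (in practice the process id), and out.go:360 is the emitting file and line.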
	I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:30:43.106412  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106440  890932 out.go:374] Setting ErrFile to fd 2...
	I1208 00:30:43.106489  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106802  890932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:30:43.107327  890932 out.go:368] Setting JSON to false
	I1208 00:30:43.108252  890932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18796,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:30:43.108353  890932 start.go:143] virtualization:  
	I1208 00:30:43.111927  890932 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:30:43.114895  890932 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:30:43.114974  890932 notify.go:221] Checking for updates...
	I1208 00:30:43.121042  890932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:30:43.124118  890932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:43.127146  890932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:30:43.130017  890932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:30:43.132953  890932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:30:43.136385  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:43.136518  890932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:30:43.171722  890932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:30:43.171844  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.232988  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.222800102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.233101  890932 docker.go:319] overlay module found
	I1208 00:30:43.236209  890932 out.go:179] * Using the docker driver based on existing profile
	I1208 00:30:43.239024  890932 start.go:309] selected driver: docker
	I1208 00:30:43.239046  890932 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.240193  890932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:30:43.240306  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.299458  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.288388391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.299888  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:43.299955  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:43.300012  890932 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.303163  890932 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:30:43.305985  890932 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:30:43.309025  890932 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:30:43.312042  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:43.312102  890932 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:30:43.312113  890932 cache.go:65] Caching tarball of preloaded images
	I1208 00:30:43.312160  890932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:30:43.312254  890932 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:30:43.312266  890932 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:30:43.312379  890932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:30:43.332475  890932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:30:43.332500  890932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:30:43.332516  890932 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:30:43.332550  890932 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:30:43.332614  890932 start.go:364] duration metric: took 40.517µs to acquireMachinesLock for "functional-386544"
	I1208 00:30:43.332637  890932 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:30:43.332643  890932 fix.go:54] fixHost starting: 
	I1208 00:30:43.332918  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:43.364362  890932 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:30:43.364391  890932 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:30:43.367522  890932 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:30:43.367561  890932 machine.go:94] provisionDockerMachine start ...
	I1208 00:30:43.367667  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.390594  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.390943  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.390953  890932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:30:43.546039  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.546064  890932 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:30:43.546132  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.563909  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.564221  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.564240  890932 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:30:43.728055  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.728136  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.746428  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.746778  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.746805  890932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
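	The snippet just sent over SSH is idempotent: unless some line already ends with the node name, it either rewrites an existing 127.0.1.1 entry in place or appends a fresh one. The effect of the sed branch can be sketched with the same expression (illustrative input, not from this run):
	
		$ echo '127.0.1.1 old-hostname' | sed 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g'
		127.0.1.1 functional-386544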
	I1208 00:30:43.898980  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:30:43.899007  890932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:30:43.899068  890932 ubuntu.go:190] setting up certificates
	I1208 00:30:43.899078  890932 provision.go:84] configureAuth start
	I1208 00:30:43.899155  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:43.917225  890932 provision.go:143] copyHostCerts
	I1208 00:30:43.917271  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917317  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:30:43.917335  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917414  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:30:43.917515  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917537  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:30:43.917547  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917575  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:30:43.917632  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917656  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:30:43.917664  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917691  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:30:43.917796  890932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:30:44.201729  890932 provision.go:177] copyRemoteCerts
	I1208 00:30:44.201799  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:30:44.201847  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.218852  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.326622  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:30:44.326687  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:30:44.345138  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:30:44.345250  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:30:44.363475  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:30:44.363575  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:30:44.382571  890932 provision.go:87] duration metric: took 483.468304ms to configureAuth
	I1208 00:30:44.382643  890932 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:30:44.382843  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:44.382857  890932 machine.go:97] duration metric: took 1.015288541s to provisionDockerMachine
	I1208 00:30:44.382865  890932 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:30:44.382880  890932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:30:44.382939  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:30:44.382987  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.401380  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.506846  890932 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:30:44.510586  890932 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:30:44.510612  890932 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:30:44.510623  890932 command_runner.go:130] > VERSION_ID="12"
	I1208 00:30:44.510628  890932 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:30:44.510633  890932 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:30:44.510637  890932 command_runner.go:130] > ID=debian
	I1208 00:30:44.510641  890932 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:30:44.510646  890932 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:30:44.510652  890932 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:30:44.510734  890932 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:30:44.510755  890932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:30:44.510768  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:30:44.510833  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:30:44.510921  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:30:44.510932  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /etc/ssl/certs/8467112.pem
	I1208 00:30:44.511028  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:30:44.511037  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> /etc/test/nested/copy/846711/hosts
	I1208 00:30:44.511082  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:30:44.518977  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:44.538494  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:30:44.556928  890932 start.go:296] duration metric: took 174.046033ms for postStartSetup
	I1208 00:30:44.557012  890932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:30:44.557057  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.579278  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.683552  890932 command_runner.go:130] > 11%
	I1208 00:30:44.683622  890932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:30:44.688016  890932 command_runner.go:130] > 174G
	I1208 00:30:44.688056  890932 fix.go:56] duration metric: took 1.355411206s for fixHost
	I1208 00:30:44.688067  890932 start.go:83] releasing machines lock for "functional-386544", held for 1.355443108s
	I1208 00:30:44.688146  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:44.705277  890932 ssh_runner.go:195] Run: cat /version.json
	I1208 00:30:44.705345  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.705617  890932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:30:44.705687  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.723084  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.728238  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.826153  890932 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:30:44.826300  890932 ssh_runner.go:195] Run: systemctl --version
	I1208 00:30:44.917784  890932 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:30:44.920412  890932 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:30:44.920484  890932 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:30:44.920574  890932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:30:44.924900  890932 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:30:44.925095  890932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:30:44.925215  890932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:30:44.933474  890932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:30:44.933497  890932 start.go:496] detecting cgroup driver to use...
	I1208 00:30:44.933530  890932 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:30:44.933580  890932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:30:44.950010  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:30:44.963687  890932 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:30:44.963783  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:30:44.980391  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:30:44.994304  890932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:30:45.255981  890932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:30:45.407305  890932 docker.go:234] disabling docker service ...
	I1208 00:30:45.407423  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:30:45.423468  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:30:45.437222  890932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:30:45.561603  890932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:30:45.705878  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:30:45.719726  890932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:30:45.733506  890932 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1208 00:30:45.735147  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:30:45.744694  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:30:45.753960  890932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:30:45.754081  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:30:45.763511  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.772723  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:30:45.781584  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.790600  890932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:30:45.799135  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:30:45.808317  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:30:45.817244  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
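	Taken together, the sed edits above steer /etc/containerd/config.toml toward this end state (a sketch of the intended result; the file itself is not captured in this log):
	
		sandbox_image = "registry.k8s.io/pause:3.10.1"
		restrict_oom_score_adj = false
		SystemdCgroup = false
		conf_dir = "/etc/cni/net.d"
		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true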
	I1208 00:30:45.826211  890932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:30:45.833037  890932 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:30:45.834008  890932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:30:45.841603  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:45.965344  890932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:30:46.100261  890932 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:30:46.100385  890932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:30:46.104210  890932 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1208 00:30:46.104295  890932 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:30:46.104358  890932 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1208 00:30:46.104385  890932 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:46.104410  890932 command_runner.go:130] > Access: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104446  890932 command_runner.go:130] > Modify: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104470  890932 command_runner.go:130] > Change: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104490  890932 command_runner.go:130] >  Birth: -
	I1208 00:30:46.104859  890932 start.go:564] Will wait 60s for crictl version
	I1208 00:30:46.104961  890932 ssh_runner.go:195] Run: which crictl
	I1208 00:30:46.108543  890932 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:30:46.108924  890932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:30:46.136367  890932 command_runner.go:130] > Version:  0.1.0
	I1208 00:30:46.136449  890932 command_runner.go:130] > RuntimeName:  containerd
	I1208 00:30:46.136470  890932 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1208 00:30:46.136491  890932 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:30:46.136542  890932 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:30:46.136636  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.156742  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.159302  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.181269  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.189080  890932 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:30:46.192076  890932 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:30:46.209081  890932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:30:46.212923  890932 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:30:46.213097  890932 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:30:46.213209  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:46.213289  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.236482  890932 command_runner.go:130] > {
	I1208 00:30:46.236506  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.236511  890932 command_runner.go:130] >     {
	I1208 00:30:46.236520  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.236526  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236531  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.236534  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236538  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236551  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.236558  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236563  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.236571  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236576  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236586  890932 command_runner.go:130] >     },
	I1208 00:30:46.236590  890932 command_runner.go:130] >     {
	I1208 00:30:46.236601  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.236605  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236610  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.236617  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236622  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236632  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.236641  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236646  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.236649  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236654  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236664  890932 command_runner.go:130] >     },
	I1208 00:30:46.236668  890932 command_runner.go:130] >     {
	I1208 00:30:46.236675  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.236679  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236687  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.236690  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236699  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236718  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.236722  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236728  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.236733  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.236740  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236743  890932 command_runner.go:130] >     },
	I1208 00:30:46.236746  890932 command_runner.go:130] >     {
	I1208 00:30:46.236753  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.236760  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236766  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.236769  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236773  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236781  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.236788  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236792  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.236803  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236808  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236814  890932 command_runner.go:130] >       },
	I1208 00:30:46.236821  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236825  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236828  890932 command_runner.go:130] >     },
	I1208 00:30:46.236832  890932 command_runner.go:130] >     {
	I1208 00:30:46.236841  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.236847  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236853  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.236856  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236860  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236868  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.236874  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236879  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.236885  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236894  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236901  890932 command_runner.go:130] >       },
	I1208 00:30:46.236908  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236912  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236916  890932 command_runner.go:130] >     },
	I1208 00:30:46.236926  890932 command_runner.go:130] >     {
	I1208 00:30:46.236933  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.236937  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236943  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.236947  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236951  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236962  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.236968  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236972  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.236976  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236980  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236989  890932 command_runner.go:130] >       },
	I1208 00:30:46.236993  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236997  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237002  890932 command_runner.go:130] >     },
	I1208 00:30:46.237005  890932 command_runner.go:130] >     {
	I1208 00:30:46.237012  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.237017  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237024  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.237027  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237032  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237040  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.237047  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237055  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.237059  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237063  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237066  890932 command_runner.go:130] >     },
	I1208 00:30:46.237076  890932 command_runner.go:130] >     {
	I1208 00:30:46.237084  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.237095  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237104  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.237107  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237112  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237120  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.237126  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237131  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.237134  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237139  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.237142  890932 command_runner.go:130] >       },
	I1208 00:30:46.237146  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237153  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237157  890932 command_runner.go:130] >     },
	I1208 00:30:46.237166  890932 command_runner.go:130] >     {
	I1208 00:30:46.237173  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.237178  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237182  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.237189  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237193  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237201  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.237206  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237210  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.237216  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237221  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.237227  890932 command_runner.go:130] >       },
	I1208 00:30:46.237231  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237235  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.237238  890932 command_runner.go:130] >     }
	I1208 00:30:46.237241  890932 command_runner.go:130] >   ]
	I1208 00:30:46.237244  890932 command_runner.go:130] > }
	I1208 00:30:46.239834  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.239857  890932 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:30:46.239919  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.262227  890932 command_runner.go:130] > {
	I1208 00:30:46.262250  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.262255  890932 command_runner.go:130] >     {
	I1208 00:30:46.262265  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.262280  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262286  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.262289  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262293  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262303  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.262310  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262315  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.262319  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262323  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262326  890932 command_runner.go:130] >     },
	I1208 00:30:46.262330  890932 command_runner.go:130] >     {
	I1208 00:30:46.262348  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.262357  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262363  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.262366  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262370  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262381  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.262386  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262392  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.262396  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262400  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262403  890932 command_runner.go:130] >     },
	I1208 00:30:46.262406  890932 command_runner.go:130] >     {
	I1208 00:30:46.262413  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.262427  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262439  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.262476  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262489  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262498  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.262502  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262506  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.262513  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.262517  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262524  890932 command_runner.go:130] >     },
	I1208 00:30:46.262531  890932 command_runner.go:130] >     {
	I1208 00:30:46.262539  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.262542  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262548  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.262553  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262557  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262565  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.262568  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262572  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.262579  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262583  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262588  890932 command_runner.go:130] >       },
	I1208 00:30:46.262592  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262605  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262609  890932 command_runner.go:130] >     },
	I1208 00:30:46.262612  890932 command_runner.go:130] >     {
	I1208 00:30:46.262619  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.262625  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262631  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.262634  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262638  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262646  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.262649  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262654  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.262660  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262678  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262686  890932 command_runner.go:130] >       },
	I1208 00:30:46.262690  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262694  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262697  890932 command_runner.go:130] >     },
	I1208 00:30:46.262701  890932 command_runner.go:130] >     {
	I1208 00:30:46.262707  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.262718  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262724  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.262727  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262731  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262739  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.262745  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262749  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.262755  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262759  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262772  890932 command_runner.go:130] >       },
	I1208 00:30:46.262776  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262780  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262783  890932 command_runner.go:130] >     },
	I1208 00:30:46.262786  890932 command_runner.go:130] >     {
	I1208 00:30:46.262793  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.262800  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262805  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.262809  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262812  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262819  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.262823  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262827  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.262834  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262838  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262844  890932 command_runner.go:130] >     },
	I1208 00:30:46.262848  890932 command_runner.go:130] >     {
	I1208 00:30:46.262857  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.262867  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262876  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.262882  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262886  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262893  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.262907  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262915  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.262919  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262922  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262929  890932 command_runner.go:130] >       },
	I1208 00:30:46.262933  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262943  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262947  890932 command_runner.go:130] >     },
	I1208 00:30:46.262950  890932 command_runner.go:130] >     {
	I1208 00:30:46.262957  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.262963  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262968  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.262971  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262975  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262982  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.262985  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262990  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.262996  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.263000  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.263013  890932 command_runner.go:130] >       },
	I1208 00:30:46.263017  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.263021  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.263024  890932 command_runner.go:130] >     }
	I1208 00:30:46.263027  890932 command_runner.go:130] >   ]
	I1208 00:30:46.263031  890932 command_runner.go:130] > }
	I1208 00:30:46.265493  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.265517  890932 cache_images.go:86] Images are preloaded, skipping loading
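The two "sudo crictl images --output json" runs above feed minikube's preload check: every image required for v1.35.0-beta.0 must already be in the containerd store, otherwise the preload tarball would be extracted. A minimal Go sketch of the same kind of check, assuming the JSON shape shown in the log; the expected-image list here is an illustrative subset, not minikube's actual list:

// preload_check.go - a minimal sketch of the image-preload check above,
// assuming the `crictl images --output json` shape shown in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the log shows: sudo crictl images --output json
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Fprintln(os.Stderr, "bad JSON:", err)
		os.Exit(1)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of the images the log above lists as preloaded.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/pause:3.10.1",
	} {
		if !have[want] {
			fmt.Println("missing:", want)
		}
	}
}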
	I1208 00:30:46.265524  890932 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:30:46.265625  890932 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:30:46.265699  890932 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:30:46.291229  890932 command_runner.go:130] > {
	I1208 00:30:46.291250  890932 command_runner.go:130] >   "cniconfig": {
	I1208 00:30:46.291256  890932 command_runner.go:130] >     "Networks": [
	I1208 00:30:46.291260  890932 command_runner.go:130] >       {
	I1208 00:30:46.291266  890932 command_runner.go:130] >         "Config": {
	I1208 00:30:46.291271  890932 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1208 00:30:46.291283  890932 command_runner.go:130] >           "Name": "cni-loopback",
	I1208 00:30:46.291288  890932 command_runner.go:130] >           "Plugins": [
	I1208 00:30:46.291292  890932 command_runner.go:130] >             {
	I1208 00:30:46.291297  890932 command_runner.go:130] >               "Network": {
	I1208 00:30:46.291301  890932 command_runner.go:130] >                 "ipam": {},
	I1208 00:30:46.291307  890932 command_runner.go:130] >                 "type": "loopback"
	I1208 00:30:46.291311  890932 command_runner.go:130] >               },
	I1208 00:30:46.291322  890932 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1208 00:30:46.291326  890932 command_runner.go:130] >             }
	I1208 00:30:46.291334  890932 command_runner.go:130] >           ],
	I1208 00:30:46.291344  890932 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1208 00:30:46.291348  890932 command_runner.go:130] >         },
	I1208 00:30:46.291356  890932 command_runner.go:130] >         "IFName": "lo"
	I1208 00:30:46.291362  890932 command_runner.go:130] >       }
	I1208 00:30:46.291366  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291371  890932 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1208 00:30:46.291375  890932 command_runner.go:130] >     "PluginDirs": [
	I1208 00:30:46.291379  890932 command_runner.go:130] >       "/opt/cni/bin"
	I1208 00:30:46.291390  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291395  890932 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1208 00:30:46.291398  890932 command_runner.go:130] >     "Prefix": "eth"
	I1208 00:30:46.291402  890932 command_runner.go:130] >   },
	I1208 00:30:46.291411  890932 command_runner.go:130] >   "config": {
	I1208 00:30:46.291415  890932 command_runner.go:130] >     "cdiSpecDirs": [
	I1208 00:30:46.291419  890932 command_runner.go:130] >       "/etc/cdi",
	I1208 00:30:46.291427  890932 command_runner.go:130] >       "/var/run/cdi"
	I1208 00:30:46.291432  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291436  890932 command_runner.go:130] >     "cni": {
	I1208 00:30:46.291448  890932 command_runner.go:130] >       "binDir": "",
	I1208 00:30:46.291453  890932 command_runner.go:130] >       "binDirs": [
	I1208 00:30:46.291457  890932 command_runner.go:130] >         "/opt/cni/bin"
	I1208 00:30:46.291460  890932 command_runner.go:130] >       ],
	I1208 00:30:46.291464  890932 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1208 00:30:46.291468  890932 command_runner.go:130] >       "confTemplate": "",
	I1208 00:30:46.291472  890932 command_runner.go:130] >       "ipPref": "",
	I1208 00:30:46.291475  890932 command_runner.go:130] >       "maxConfNum": 1,
	I1208 00:30:46.291479  890932 command_runner.go:130] >       "setupSerially": false,
	I1208 00:30:46.291483  890932 command_runner.go:130] >       "useInternalLoopback": false
	I1208 00:30:46.291487  890932 command_runner.go:130] >     },
	I1208 00:30:46.291492  890932 command_runner.go:130] >     "containerd": {
	I1208 00:30:46.291499  890932 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1208 00:30:46.291504  890932 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1208 00:30:46.291509  890932 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1208 00:30:46.291515  890932 command_runner.go:130] >       "runtimes": {
	I1208 00:30:46.291519  890932 command_runner.go:130] >         "runc": {
	I1208 00:30:46.291527  890932 command_runner.go:130] >           "ContainerAnnotations": null,
	I1208 00:30:46.291533  890932 command_runner.go:130] >           "PodAnnotations": null,
	I1208 00:30:46.291545  890932 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1208 00:30:46.291550  890932 command_runner.go:130] >           "cgroupWritable": false,
	I1208 00:30:46.291554  890932 command_runner.go:130] >           "cniConfDir": "",
	I1208 00:30:46.291558  890932 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1208 00:30:46.291564  890932 command_runner.go:130] >           "io_type": "",
	I1208 00:30:46.291568  890932 command_runner.go:130] >           "options": {
	I1208 00:30:46.291576  890932 command_runner.go:130] >             "BinaryName": "",
	I1208 00:30:46.291580  890932 command_runner.go:130] >             "CriuImagePath": "",
	I1208 00:30:46.291588  890932 command_runner.go:130] >             "CriuWorkPath": "",
	I1208 00:30:46.291593  890932 command_runner.go:130] >             "IoGid": 0,
	I1208 00:30:46.291599  890932 command_runner.go:130] >             "IoUid": 0,
	I1208 00:30:46.291604  890932 command_runner.go:130] >             "NoNewKeyring": false,
	I1208 00:30:46.291615  890932 command_runner.go:130] >             "Root": "",
	I1208 00:30:46.291619  890932 command_runner.go:130] >             "ShimCgroup": "",
	I1208 00:30:46.291624  890932 command_runner.go:130] >             "SystemdCgroup": false
	I1208 00:30:46.291627  890932 command_runner.go:130] >           },
	I1208 00:30:46.291641  890932 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1208 00:30:46.291648  890932 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1208 00:30:46.291655  890932 command_runner.go:130] >           "runtimePath": "",
	I1208 00:30:46.291660  890932 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1208 00:30:46.291664  890932 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1208 00:30:46.291668  890932 command_runner.go:130] >           "snapshotter": ""
	I1208 00:30:46.291672  890932 command_runner.go:130] >         }
	I1208 00:30:46.291675  890932 command_runner.go:130] >       }
	I1208 00:30:46.291678  890932 command_runner.go:130] >     },
	I1208 00:30:46.291689  890932 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1208 00:30:46.291698  890932 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1208 00:30:46.291705  890932 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1208 00:30:46.291709  890932 command_runner.go:130] >     "disableApparmor": false,
	I1208 00:30:46.291714  890932 command_runner.go:130] >     "disableHugetlbController": true,
	I1208 00:30:46.291721  890932 command_runner.go:130] >     "disableProcMount": false,
	I1208 00:30:46.291726  890932 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1208 00:30:46.291730  890932 command_runner.go:130] >     "enableCDI": true,
	I1208 00:30:46.291740  890932 command_runner.go:130] >     "enableSelinux": false,
	I1208 00:30:46.291745  890932 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1208 00:30:46.291749  890932 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1208 00:30:46.291753  890932 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1208 00:30:46.291758  890932 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1208 00:30:46.291763  890932 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1208 00:30:46.291770  890932 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1208 00:30:46.291775  890932 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1208 00:30:46.291789  890932 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291798  890932 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1208 00:30:46.291803  890932 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291810  890932 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1208 00:30:46.291819  890932 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1208 00:30:46.291823  890932 command_runner.go:130] >   },
	I1208 00:30:46.291827  890932 command_runner.go:130] >   "features": {
	I1208 00:30:46.291831  890932 command_runner.go:130] >     "supplemental_groups_policy": true
	I1208 00:30:46.291835  890932 command_runner.go:130] >   },
	I1208 00:30:46.291839  890932 command_runner.go:130] >   "golang": "go1.24.9",
	I1208 00:30:46.291850  890932 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291862  890932 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291866  890932 command_runner.go:130] >   "runtimeHandlers": [
	I1208 00:30:46.291870  890932 command_runner.go:130] >     {
	I1208 00:30:46.291874  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291886  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291890  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291893  890932 command_runner.go:130] >       }
	I1208 00:30:46.291897  890932 command_runner.go:130] >     },
	I1208 00:30:46.291907  890932 command_runner.go:130] >     {
	I1208 00:30:46.291911  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291916  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291919  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291922  890932 command_runner.go:130] >       },
	I1208 00:30:46.291926  890932 command_runner.go:130] >       "name": "runc"
	I1208 00:30:46.291930  890932 command_runner.go:130] >     }
	I1208 00:30:46.291939  890932 command_runner.go:130] >   ],
	I1208 00:30:46.291952  890932 command_runner.go:130] >   "status": {
	I1208 00:30:46.291955  890932 command_runner.go:130] >     "conditions": [
	I1208 00:30:46.291959  890932 command_runner.go:130] >       {
	I1208 00:30:46.291962  890932 command_runner.go:130] >         "message": "",
	I1208 00:30:46.291966  890932 command_runner.go:130] >         "reason": "",
	I1208 00:30:46.291973  890932 command_runner.go:130] >         "status": true,
	I1208 00:30:46.291983  890932 command_runner.go:130] >         "type": "RuntimeReady"
	I1208 00:30:46.291990  890932 command_runner.go:130] >       },
	I1208 00:30:46.291993  890932 command_runner.go:130] >       {
	I1208 00:30:46.292000  890932 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1208 00:30:46.292004  890932 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1208 00:30:46.292009  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292013  890932 command_runner.go:130] >         "type": "NetworkReady"
	I1208 00:30:46.292019  890932 command_runner.go:130] >       },
	I1208 00:30:46.292022  890932 command_runner.go:130] >       {
	I1208 00:30:46.292047  890932 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1208 00:30:46.292057  890932 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1208 00:30:46.292063  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292068  890932 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1208 00:30:46.292074  890932 command_runner.go:130] >       }
	I1208 00:30:46.292077  890932 command_runner.go:130] >     ]
	I1208 00:30:46.292080  890932 command_runner.go:130] >   }
	I1208 00:30:46.292083  890932 command_runner.go:130] > }
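The "sudo crictl info" dump above is what the CNI decision on the following lines is based on: RuntimeReady is true, but NetworkReady is false with reason NetworkPluginNotReady because nothing has been written to /etc/cni/net.d yet. A minimal Go sketch for reading those runtime conditions, assuming the JSON shape shown above:

// runtime_status.go - a minimal sketch of reading the runtime conditions
// (RuntimeReady, NetworkReady, ...) from the `crictl info` output above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "info").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range info.Status.Conditions {
		// In the log above, NetworkReady stays false with reason
		// NetworkPluginNotReady until a CNI config is installed.
		fmt.Printf("%s=%v reason=%q\n", c.Type, c.Status, c.Reason)
	}
}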
	I1208 00:30:46.295037  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:46.295064  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:46.295108  890932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:30:46.295135  890932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:30:46.295307  890932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:30:46.295389  890932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:30:46.302776  890932 command_runner.go:130] > kubeadm
	I1208 00:30:46.302853  890932 command_runner.go:130] > kubectl
	I1208 00:30:46.302863  890932 command_runner.go:130] > kubelet
	I1208 00:30:46.303600  890932 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:30:46.303710  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:30:46.311760  890932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:30:46.325760  890932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:30:46.340134  890932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
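The kubeadm config rendered above and written to /var/tmp/minikube/kubeadm.yaml.new is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch that splits that stream and prints each document's apiVersion and kind, assuming the file path from the log and gopkg.in/yaml.v3 for decoding:

// kubeadm_kinds.go - a minimal sketch that walks the multi-document
// kubeadm config rendered above and prints each document's header.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the YAML stream
			}
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, and KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}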
	I1208 00:30:46.359100  890932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:30:46.362934  890932 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1208 00:30:46.363653  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:46.491856  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:47.343005  890932 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:30:47.343028  890932 certs.go:195] generating shared ca certs ...
	I1208 00:30:47.343054  890932 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:47.343240  890932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:30:47.343312  890932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:30:47.343326  890932 certs.go:257] generating profile certs ...
	I1208 00:30:47.343460  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:30:47.343536  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:30:47.343590  890932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:30:47.343612  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:30:47.343630  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:30:47.343655  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:30:47.343671  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:30:47.343691  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:30:47.343706  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:30:47.343719  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:30:47.343734  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:30:47.343800  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:30:47.343845  890932 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:30:47.343860  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:30:47.343888  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:30:47.343924  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:30:47.343960  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:30:47.344029  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:47.344078  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.344096  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem -> /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.344112  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.344800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:30:47.365934  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:30:47.392004  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:30:47.412283  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:30:47.434592  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:30:47.452176  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:30:47.471245  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:30:47.489925  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:30:47.511686  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:30:47.530800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:30:47.549900  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:30:47.568360  890932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:30:47.581856  890932 ssh_runner.go:195] Run: openssl version
	I1208 00:30:47.588310  890932 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:30:47.588394  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.596457  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:30:47.604012  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607834  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607889  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607941  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.648743  890932 command_runner.go:130] > 3ec20f2e
	I1208 00:30:47.649210  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:30:47.656730  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.664307  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:30:47.671943  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.675995  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676036  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676087  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.716996  890932 command_runner.go:130] > b5213941
	I1208 00:30:47.717090  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:30:47.724719  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.732215  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:30:47.740036  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744030  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744106  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744186  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.784659  890932 command_runner.go:130] > 51391683
	I1208 00:30:47.785207  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
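Each CA installation step above follows the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash (b5213941 for minikubeCA.pem in the log), and symlink /etc/ssl/certs/<hash>.0 back to the PEM so OpenSSL's hashed-directory lookup finds it. A minimal Go sketch of that loop for a single certificate, shelling out to the same openssl invocation shown in the log:

// cert_symlink.go - a minimal sketch of the CA-installation loop above:
// hash the certificate subject and create the /etc/ssl/certs/<hash>.0 link.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log shows: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	// os.Symlink fails if the link already exists; remove first, like `ln -fs`.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("installed", link)
}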
	I1208 00:30:47.792679  890932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796767  890932 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796815  890932 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:30:47.796824  890932 command_runner.go:130] > Device: 259,1	Inode: 3390890     Links: 1
	I1208 00:30:47.796831  890932 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:47.796838  890932 command_runner.go:130] > Access: 2025-12-08 00:26:39.668848968 +0000
	I1208 00:30:47.796844  890932 command_runner.go:130] > Modify: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796849  890932 command_runner.go:130] > Change: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796854  890932 command_runner.go:130] >  Birth: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796956  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:30:47.837955  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.838424  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:30:47.879403  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.879847  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:30:47.921180  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.921679  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:30:47.962513  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.963017  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:30:48.007633  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:48.007748  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:30:48.052514  890932 command_runner.go:130] > Certificate will not expire
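The "openssl x509 -checkend 86400" probes above ask whether each control-plane certificate expires within the next 24 hours; exit status 0 prints "Certificate will not expire". A minimal Go equivalent using crypto/x509, with the path being one of the certificates checked in the log:

// checkend.go - a minimal sketch of the `openssl x509 -checkend 86400`
// probe above: report whether a PEM certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1) // same exit convention as openssl -checkend
	}
	fmt.Println("Certificate will not expire")
}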
	I1208 00:30:48.052941  890932 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:48.053033  890932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:30:48.053097  890932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:30:48.081438  890932 cri.go:89] found id: ""
	I1208 00:30:48.081565  890932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:30:48.089271  890932 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:30:48.089305  890932 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:30:48.089313  890932 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:30:48.093391  890932 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:30:48.093432  890932 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:30:48.093495  890932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:30:48.102864  890932 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:30:48.103337  890932 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.103450  890932 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "functional-386544" cluster setting kubeconfig missing "functional-386544" context setting]
	I1208 00:30:48.103819  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
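
kubeconfig.go has noticed that both the cluster and the context entry for "functional-386544" are missing and rewrites the kubeconfig under a file lock. A sketch of the same repair using client-go's clientcmd package (endpoint, CA path and names from the log; error handling trimmed, and this is not minikube's actual code):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/22054-843440/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Re-add the missing cluster and context entries for the profile.
    	cluster := clientcmdapi.NewCluster()
    	cluster.Server = "https://192.168.49.2:8441"
    	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt"
    	cfg.Clusters["functional-386544"] = cluster

    	ctx := clientcmdapi.NewContext()
    	ctx.Cluster = "functional-386544"
    	ctx.AuthInfo = "functional-386544"
    	cfg.Contexts["functional-386544"] = ctx

    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }
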
	I1208 00:30:48.104260  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.104413  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.105009  890932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:30:48.105030  890932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:30:48.105036  890932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:30:48.105041  890932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:30:48.105047  890932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:30:48.105105  890932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:30:48.105315  890932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:30:48.117774  890932 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:30:48.117857  890932 kubeadm.go:602] duration metric: took 24.417752ms to restartPrimaryControlPlane
	I1208 00:30:48.117881  890932 kubeadm.go:403] duration metric: took 64.945899ms to StartCluster
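
The restart path decides whether the control plane needs reconfiguring by diffing the live kubeadm.yaml against the freshly generated one; `diff` exiting 0 lets it skip `kubeadm init` entirely, which is why restartPrimaryControlPlane finishes in ~24ms above. A sketch of that check (paths from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// diff exits 0 when the files match and 1 when they differ.
    	err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new").Run()
    	if err == nil {
    		fmt.Println("the running cluster does not require reconfiguration")
    	} else {
    		fmt.Println("kubeadm config drifted, reconfiguration needed:", err)
    	}
    }
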
	I1208 00:30:48.117925  890932 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.118025  890932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.118797  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.119107  890932 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:30:48.119487  890932 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:30:48.119575  890932 addons.go:70] Setting storage-provisioner=true in profile "functional-386544"
	I1208 00:30:48.119600  890932 addons.go:239] Setting addon storage-provisioner=true in "functional-386544"
	I1208 00:30:48.119601  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:48.119630  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.120591  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.119636  890932 addons.go:70] Setting default-storageclass=true in profile "functional-386544"
	I1208 00:30:48.120910  890932 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386544"
	I1208 00:30:48.121235  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.122185  890932 out.go:179] * Verifying Kubernetes components...
	I1208 00:30:48.124860  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:48.159125  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.159302  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.159592  890932 addons.go:239] Setting addon default-storageclass=true in "functional-386544"
	I1208 00:30:48.159620  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.160038  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.170516  890932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:30:48.173762  890932 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.173784  890932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:30:48.173857  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.210938  890932 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:48.210964  890932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:30:48.211031  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.228251  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.254642  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
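
The "scp memory" lines above stream the two addon manifests to the node over the container's published SSH port (127.0.0.1:33558) instead of copying files from disk. A rough Go equivalent driving the stock scp client (port and key path from the log; the manifest contents and the /tmp destination are placeholders — minikube itself writes into /etc/kubernetes/addons with elevated permissions):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Materialize the manifest locally, then push it over SSH.
    	manifest := []byte("# storage-provisioner.yaml contents (2676 bytes in the log)\n")
    	if err := os.WriteFile("storage-provisioner.yaml", manifest, 0o644); err != nil {
    		panic(err)
    	}
    	cmd := exec.Command("scp",
    		"-P", "33558",
    		"-i", "/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa",
    		"storage-provisioner.yaml",
    		"docker@127.0.0.1:/tmp/storage-provisioner.yaml")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    	// A real runner would follow up with a sudo move into place.
    }
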
	I1208 00:30:48.338576  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:48.365732  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.388846  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.094190  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094240  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094289  890932 retry.go:31] will retry after 221.572731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094347  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094353  890932 retry.go:31] will retry after 127.29639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
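
Each failed apply is handed back to retry.go, which reschedules it after a growing, jittered delay (221ms and 127ms here, several seconds later in the log). A self-contained sketch of that retry-with-backoff pattern (the constants and jitter scheme are illustrative, not minikube's actual retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a jittered, roughly doubling delay between tries.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		} else if i == attempts-1 {
    			return err
    		} else {
    			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
    			fmt.Printf("will retry after %v\n", jittered)
    			time.Sleep(jittered)
    			delay *= 2
    		}
    	}
    	return errors.New("unreachable")
    }

    func main() {
    	calls := 0
    	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("connection refused") // simulate a down apiserver
    		}
    		return nil
    	})
    	fmt.Println("succeeded after", calls, "calls")
    }
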
	I1208 00:30:49.094558  890932 node_ready.go:35] waiting up to 6m0s for node "functional-386544" to be "Ready" ...
	I1208 00:30:49.094733  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.094831  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.095237  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
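
In parallel, node_ready.go polls GET /api/v1/nodes/functional-386544 every ~500ms for up to 6 minutes; while the apiserver is down the round trip fails immediately, which is the empty status="" above. A stripped-down reachability poller over plain HTTP (sketch only: it skips TLS verification and does not parse the Node's Ready condition, whereas the real client authenticates with the profile's client cert and inspects the object):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.49.2:8441/api/v1/nodes/functional-386544"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: real code presents client.crt/client.key and
    		// verifies the cluster CA instead of skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("will retry:", err) // e.g. connect: connection refused
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("apiserver answered with", resp.Status)
    		return
    	}
    	fmt.Println("timed out waiting for node")
    }
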
	I1208 00:30:49.222592  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.293397  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.293520  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.293548  890932 retry.go:31] will retry after 191.192714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.316617  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.385398  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.389149  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.389192  890932 retry.go:31] will retry after 221.019406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.485459  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.544915  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.548575  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.548650  890932 retry.go:31] will retry after 430.912171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.594843  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.594928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.595415  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.610614  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.669839  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.669884  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.669904  890932 retry.go:31] will retry after 602.088887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.980400  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:50.054076  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.057921  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.057957  890932 retry.go:31] will retry after 1.251170732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.095196  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.095305  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:50.273088  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:50.333799  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.333898  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.333941  890932 retry.go:31] will retry after 841.525831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.595581  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.595651  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.595949  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:51.095803  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.096238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:51.096319  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
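
"connect: connection refused" here means nothing is listening on 192.168.49.2:8441 yet (the apiserver has not come back after the restart), rather than a TLS or auth failure, which would only surface after a TCP connection succeeded. A quick way to tell the two apart is a raw TCP dial (sketch):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
    	if err != nil {
    		// "connection refused" => no listener on the port at all.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port open; any failure is higher in the stack")
    }
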
	I1208 00:30:51.176619  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:51.234663  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.238362  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.238405  890932 retry.go:31] will retry after 1.674228806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.309626  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:51.370041  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.373759  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.373793  890932 retry.go:31] will retry after 1.825797421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.595251  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.595336  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.595859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.095576  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.095656  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.096001  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.594894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.595585  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.912970  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:52.971340  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:52.975027  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:52.975063  890932 retry.go:31] will retry after 2.158822419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.095343  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.095426  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.095834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:53.200381  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:53.262558  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:53.262597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.262618  890932 retry.go:31] will retry after 2.117348765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.595941  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.596038  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.596315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:53.596377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:54.094883  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.095321  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:54.595354  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.595475  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.097427  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.097684  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.097999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.134417  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:55.207147  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.207186  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.207211  890932 retry.go:31] will retry after 1.888454669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.380583  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:55.442228  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.442305  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.442354  890932 retry.go:31] will retry after 2.144073799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.595860  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.595937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.596276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:56.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.095041  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:56.095552  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:56.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.595189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.094913  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.094995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.096590  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:57.159346  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.159395  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.159419  890932 retry.go:31] will retry after 2.451052222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.586888  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:57.595329  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.595647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.595917  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.644195  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.648428  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.648466  890932 retry.go:31] will retry after 6.27239315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:58.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.095132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:58.595202  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.595277  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.595673  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:58.595737  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:59.095382  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.095474  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.095817  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.595497  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.595641  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.595962  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.611138  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:59.678142  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:59.678192  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:59.678217  890932 retry.go:31] will retry after 3.668002843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:00.095797  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.096216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:00.594886  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.594963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.595392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:01.095660  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.095757  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.096070  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:01.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:01.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.594889  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.595445  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.094968  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.595407  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.346685  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:03.431951  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.432026  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.432051  890932 retry.go:31] will retry after 7.871453146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.595808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.595982  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.596320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:03.596392  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:03.921995  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:03.979614  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.984229  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.984264  890932 retry.go:31] will retry after 6.338984785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:04.095500  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.095579  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.095881  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:04.595749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.596230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.094969  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.594775  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.595280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:06.094874  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:06.095343  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:06.594851  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.594931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.596121  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:07.095769  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.095852  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.096129  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:07.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.595312  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.594744  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.594830  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.595101  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:08.595154  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:09.094875  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.094970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.095284  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:09.594884  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.594974  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.095326  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.095417  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.095739  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.324305  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:10.384998  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:10.385051  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.385071  890932 retry.go:31] will retry after 7.782157506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.595548  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.595897  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:10.595950  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:11.095753  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.095835  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:11.304608  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:11.367180  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:11.367234  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.367256  890932 retry.go:31] will retry after 13.123466664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.595455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.595807  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.095614  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.095694  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.095989  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.594814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:13.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.094906  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:13.095366  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:13.595620  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.595700  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.094918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.095230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.594881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.595183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.094943  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.095289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.594876  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.595270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:15.595318  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:16.094820  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.094894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.095164  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:16.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.594908  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.595244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.095054  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.095138  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.095471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.595816  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.595940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.596241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:17.596293  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:18.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.094955  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:18.168028  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:18.232113  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:18.232150  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.232169  890932 retry.go:31] will retry after 8.094581729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.595690  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.595775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.596183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.095628  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.095697  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.096011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.594718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.594802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.595181  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:20.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.095232  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:20.095311  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:20.595598  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.595793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.596357  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.095040  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.594912  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.595011  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.595362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.094750  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.094826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.095087  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.594811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:22.595315  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:23.094999  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.095088  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.095463  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:23.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.595136  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.094856  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.490869  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:24.557459  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:24.557507  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.557527  890932 retry.go:31] will retry after 14.933128441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.595841  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.595922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.596313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:24.596367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:25.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.094843  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.095113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:25.594817  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.594915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.595217  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.094904  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.095360  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.327725  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:26.388171  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:26.388210  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.388230  890932 retry.go:31] will retry after 17.607962094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.595498  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.595632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.595892  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:27.095752  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.095851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.096189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:27.096258  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:27.594738  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.095672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.096073  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.595257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.595836  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.595984  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.596331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:29.596385  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:30.095156  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.095252  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.095627  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:30.595556  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.596442  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.094732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.094808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.095102  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.594886  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.595210  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:32.094828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.095216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:32.095266  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:32.595760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.595841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.596354  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.094945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.095264  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.594878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:34.094814  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.095244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:34.095287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:34.594940  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.595021  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.595365  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.094942  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.095029  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.595132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:36.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.094904  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.095255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:36.095316  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:36.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.595276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.095623  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.095696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.095973  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.594749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.594850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.595227  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:38.094987  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.095112  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:38.095555  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:38.595474  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.595556  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.595831  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.095726  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.095806  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.096148  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.491741  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:39.568327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:39.568372  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.568394  890932 retry.go:31] will retry after 16.95217324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.595718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.596632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.597031  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:40.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.095785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:40.096128  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:40.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.595175  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.094806  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.094893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.095209  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.595791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.596479  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.094922  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.095018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.095545  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.595373  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.595463  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.596518  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:31:42.596581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:43.095290  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.095363  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.095661  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.595732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.595812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.596157  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.996743  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:44.061795  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:44.065597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.065636  890932 retry.go:31] will retry after 36.030777087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
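	[editor's note] Every failure in this stretch has the same root cause: nothing is listening on the apiserver port 8441, so both the openapi download behind kubectl validation and the node polls get "connection refused". A small diagnostic probe one could run against the apiserver's /healthz endpoint, skipping TLS verification since the question is only whether anything answers (illustrative, not part of the test):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8441/healthz")
		if err != nil {
			// While the control plane is down this prints
			// "dial tcp [::1]:8441: connect: connection refused", as in the log.
			fmt.Println("probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}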
	I1208 00:31:44.094709  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.095134  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:44.595619  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.595689  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.596188  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:45.095192  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.095284  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:45.095814  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:45.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.595664  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.596700  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:46.095471  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.095564  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.095854  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:46.595665  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.595741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.596605  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:47.095443  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.095832  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:47.095881  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:47.595397  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.595480  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.595753  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.095688  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.095797  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.096203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.594869  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:49.095593  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.095675  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.096008  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:49.096067  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:49.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.594865  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.595221  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.094833  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.596966  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:51.095757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.095841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:51.096238  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:51.594921  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.595014  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.595361  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.094793  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.095231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.594828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.094823  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.594757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.594827  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.595090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:53.595131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:54.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.095337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:54.595028  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.595111  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.595443  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.095154  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.095659  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.595576  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.595659  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.595995  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:55.596040  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:56.094916  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:56.520835  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:56.580569  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580606  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580706  890932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
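[editor's note] The storage-provisioner failure above is kubectl's client-side validation failing: it cannot download the OpenAPI schema from the unreachable apiserver, so the apply exits with status 1 and addons.go logs "apply failed, will retry" before re-running the callback later. A rough sketch of that shell-out-and-retry pattern follows; the command line is copied from the log, while the attempt count and delay are invented placeholders, not minikube's actual policy.

// Sketch: run the bundled kubectl against an addon manifest and retry on
// failure, echoing the "apply failed, will retry" behaviour in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed, will retry: %v\n%s", err, out)
		fmt.Println(lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3, 10*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}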
	I1208 00:31:56.595717  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.595785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.596127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.094846  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.094922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.594959  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.595375  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:58.095719  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.095802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.096233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:58.096313  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:58.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.095006  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.095098  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.095434  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.594763  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.594848  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.595114  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.095594  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.595570  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.596011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:00.596082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:01.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.595258  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.096010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.595794  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.596311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:02.596371  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:03.095057  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.095145  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.095500  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:03.595367  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.595442  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.095519  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.095642  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.096000  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.595726  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.595814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.596263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:05.095616  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.095688  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.095960  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:05.096006  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:05.595742  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.595817  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.595654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.596003  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:07.095781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.095861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.096199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:07.096254  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:07.594824  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.594910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.094781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.095147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.595140  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.595213  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.595560  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.095144  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.095234  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.095578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.595126  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.595198  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.595458  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:09.595499  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.095157  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.095251  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.095657  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.595220  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.595297  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.595648  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.095385  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.095455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.095752  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.595492  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.595574  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.595922  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:11.595978  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.095776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.095855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.096220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.594787  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.595182  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.094907  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.094987  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.095332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.595577  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:13.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:14.095571  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.095649  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.095941  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.595772  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.595853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.596231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.094898  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.095334  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.595180  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:16.095326  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:16.595008  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.595092  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.595453  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.094788  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.095049  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.594824  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.595151  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.094845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.094926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.595685  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.595803  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.596147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:18.596225  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:19.094780  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.095319  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.594891  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.594970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.595320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.094805  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.094881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.095201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.097611  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:20.173666  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173721  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173816  890932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
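[editor's note] The storageclass addon fails identically. Note that the --validate=false escape hatch suggested in the error message would only change the failure mode here: skipping schema validation avoids the OpenAPI download, but the apply itself still needs a reachable apiserver. One hypothetical guard, not something this log shows minikube doing, is to probe the apiserver's /readyz endpoint (served to unauthenticated clients by default) before attempting the apply; a minimal sketch:

// Sketch: cheap health probe against the apiserver before doing real work.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverReady(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving cert is not trusted by the host running
		// this probe, so skip verification for the health check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println("ready:", apiserverReady("https://192.168.49.2:8441/readyz"))
}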
	I1208 00:32:20.177110  890932 out.go:179] * Enabled addons: 
	I1208 00:32:20.180584  890932 addons.go:530] duration metric: took 1m32.061097112s for enable addons: enabled=[]
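[editor's note] The "duration metric" line records wall-clock time for the whole addon-enable phase, and enabled=[] reflects that both callbacks failed against the dead apiserver. A toy reproduction of how such a line is typically produced (enableAddons is a hypothetical stand-in, not minikube's function):

package main

import (
	"log"
	"time"
)

// enableAddons stands in for minikube's addon callbacks; here every addon
// fails, as in the log, so the enabled list comes back empty.
func enableAddons() []string { return nil }

func main() {
	start := time.Now()
	enabled := enableAddons()
	log.Printf("duration metric: took %s for enable addons: enabled=%v", time.Since(start), enabled)
}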
	I1208 00:32:20.595272  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.595353  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.595670  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.095445  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.095926  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:21.595648  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.596006  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.094730  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.094810  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.095155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.594845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.594924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.095654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.095734  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.096034  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.096082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.594882  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.595243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.094924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.095286  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.595670  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.595754  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.596025  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.095811  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.095896  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.096308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.096381  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:25.594842  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.095626  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.095702  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.095977  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.595770  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.596206  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.094927  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.095271  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.594777  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:27.595194  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.094869  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.095355  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:28.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.595399  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.095084  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.095158  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.595122  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.595197  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.595539  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:29.595597  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:30.095160  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.095253  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.595339  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.595416  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.595701  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.095621  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.095959  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.595634  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.595713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.596065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:31.596120  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:32.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.096086  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.595231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.094941  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.594792  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.094953  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:34.095348  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:34.595041  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.595122  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.095733  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.096082  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.594826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.595179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.094819  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.595680  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.595807  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.596074  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:36.596131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical 500 ms poll repeats from 00:32:37 through 00:33:38: each cycle sends the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-386544 with an empty request body and the same Accept/User-Agent headers, fails within 0-1 ms with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 emits the same "will retry" warning roughly every 2-2.5 s ...]
	I1208 00:33:38.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:33:38.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:38.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:39.095004  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.095087  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.095436  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:39.095492  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:39.595152  890932 type.go:168] "Request Body" body=""
	I1208 00:33:39.595232  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:39.595511  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.094868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.095291  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:40.594994  890932 type.go:168] "Request Body" body=""
	I1208 00:33:40.595078  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:40.595449  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:41.095641  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.095710  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.095987  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:41.096029  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:41.595774  890932 type.go:168] "Request Body" body=""
	I1208 00:33:41.595854  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:41.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.094945  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.095040  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.095447  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:42.594810  890932 type.go:168] "Request Body" body=""
	I1208 00:33:42.594880  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:42.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.095281  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:43.595149  890932 type.go:168] "Request Body" body=""
	I1208 00:33:43.595226  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:43.595578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:43.595639  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:44.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.095775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.096055  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:44.594826  890932 type.go:168] "Request Body" body=""
	I1208 00:33:44.594909  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:44.595246  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.094986  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:45.595704  890932 type.go:168] "Request Body" body=""
	I1208 00:33:45.595779  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:45.596135  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:45.596188  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:46.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.095300  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:46.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:33:46.594951  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:46.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.094796  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.094870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.095149  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:47.594854  890932 type.go:168] "Request Body" body=""
	I1208 00:33:47.594930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:47.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:48.095000  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.095085  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.095460  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:48.095511  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:48.595390  890932 type.go:168] "Request Body" body=""
	I1208 00:33:48.595476  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:48.595748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.095572  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.095647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.095999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:49.595785  890932 type.go:168] "Request Body" body=""
	I1208 00:33:49.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:49.596224  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.095203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:50.594890  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.595313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:50.595368  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.094876  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.095346  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.595652  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.095805  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.096192  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.594926  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.595378  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:52.595433  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.094786  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.594865  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.094881  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.094965  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.595582  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.595660  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.595948  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:54.595991  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:55.095773  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.095890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.096222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.595262  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.095686  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.095975  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.595757  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.595833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.596223  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:56.596285  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:57.094855  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.095265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.595601  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.595670  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.595954  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.095735  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.095811  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.096159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.594840  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.594919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.095680  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.095963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:59.096015  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:59.595776  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.595860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.596187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.094948  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.095044  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.594807  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.595187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.094949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.594909  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.595385  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:01.595446  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:02.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.095145  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.594938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.095022  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.095104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.095477  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.595437  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.595711  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:03.595753  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:04.095511  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.095589  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.095964  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.595805  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.595893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.596256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.094892  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.095280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.595117  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.595525  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.095311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:06.095367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:06.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.595230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.095222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.594889  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.594993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.595353  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.095670  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.095741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:08.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:08.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.595235  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.094901  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.094980  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.595609  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.595691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.595986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.594928  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.595018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.595327  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:10.595376  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:11.094812  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.094900  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.095243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.595288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.094946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.595130  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.094818  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.094897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:13.095308  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:13.594998  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.595102  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.595450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.095713  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.095782  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.096067  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.595722  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.595804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.094925  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.095362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:15.095419  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:15.595024  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.595091  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.595369  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.094880  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.595018  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.595096  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.595400  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.095070  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.095425  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:17.095470  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:17.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.594950  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.595419  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.094891  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.094971  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.095301  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.595365  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.595444  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.595738  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.095230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.095306  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.095655  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:19.095709  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.595477  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.595561  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.595895  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.095714  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.096185  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.094844  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.095213  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.595647  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.595727  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.596033  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:21.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:22.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.094856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.594975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.595067  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.595475  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.095065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.595295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.094995  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.095075  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.095444  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.095501  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:24.595780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.595858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.596186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.094765  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.095211  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.594923  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.595001  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.595307  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.095156  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.594911  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:26.595299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.094975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.595139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.094942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.095238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.595230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.595311  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.595664  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:28.595719  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[ repeated entries condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 request/response pair above repeats every ~500 ms from 00:34:29 through 00:35:29, each attempt returning an empty response in 0 ms, with node_ready.go:55 emitting the same "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning every fourth or fifth attempt (00:34:31, 00:34:33, 00:34:36, ..., 00:35:26, 00:35:28) ]
	I1208 00:35:30.094972  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.095501  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:30.594791  890932 type.go:168] "Request Body" body=""
	I1208 00:35:30.594870  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:30.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:31.094879  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.095299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:31.095357  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:31.594860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:31.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:31.595255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.094855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.095179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:32.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:35:32.594962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:32.595305  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:33.095036  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.095132  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.095524  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:33.095581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:33.594888  890932 type.go:168] "Request Body" body=""
	I1208 00:35:33.594964  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:33.595242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.094933  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.095392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:34.594946  890932 type.go:168] "Request Body" body=""
	I1208 00:35:34.595024  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:34.595376  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.095094  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.095178  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.095522  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:35.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:35:35.594940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:35.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:35.595291  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:36.094854  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.095261  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:36.594784  890932 type.go:168] "Request Body" body=""
	I1208 00:35:36.594860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:36.595205  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:37.594853  890932 type.go:168] "Request Body" body=""
	I1208 00:35:37.594935  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:37.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:37.595344  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:38.095615  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.095691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.095993  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:38.594841  890932 type.go:168] "Request Body" body=""
	I1208 00:35:38.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:38.595236  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.094849  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.094933  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:39.594821  890932 type.go:168] "Request Body" body=""
	I1208 00:35:39.594893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:39.595159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:40.094832  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.094914  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:40.095383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:40.595050  890932 type.go:168] "Request Body" body=""
	I1208 00:35:40.595133  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:40.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.095165  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.095247  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:41.595432  890932 type.go:168] "Request Body" body=""
	I1208 00:35:41.595533  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:41.595908  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:42.095710  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.096304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.096383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.594857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.595136  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.595212  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.595549  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.095717  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.095796  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.096072  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.595340  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.595058  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.595128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.595471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.095186  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.095266  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.595402  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.595481  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.595824  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.595879  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.595618  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.595696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.596010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.095716  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.095799  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.096202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.595337  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.595413  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.595706  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.095444  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.095524  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.095902  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.095961  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.595540  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.595625  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.595976  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.095709  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.095792  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.096095  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.594799  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.594874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.094959  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.095064  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.095433  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.594801  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.595287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.094975  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.095331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.595042  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.595124  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.595480  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.095139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.594836  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.595282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.595384  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.094872  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.095335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.595658  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.595729  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.596021  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.094747  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.094842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.095674  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.096062  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.096108  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.594963  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.595039  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.595371  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.094934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.594883  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.594996  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.595394  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.095103  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.095186  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.095515  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.595715  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.595795  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.596169  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.596227  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:59.095645  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.096039  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.594804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.595133  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.094929  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.095015  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.095342  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.595183  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.595265  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.595623  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.095438  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.095859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:01.095916  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.595630  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.595708  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.596080  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.096058  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.594816  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.594895  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.595265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.095283  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.594768  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.595207  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:03.595263  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:04.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.594813  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.594897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.095649  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.095720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.595766  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.596299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:06.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.095304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.595639  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.595720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.596054  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.094760  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.094857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.095153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.594898  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.594972  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.595325  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.095635  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.095713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.095986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:08.096028  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:08.595142  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.595227  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.595555  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.095287  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.095364  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.095690  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.595392  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.595461  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.095517  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.595700  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.595784  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:10.596216  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:11.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.094850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.595266  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.094980  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.095061  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.095386  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.595126  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.094912  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:13.095347  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:13.595000  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.595104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.595437  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.095097  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.095172  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.095450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.595177  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.595281  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.595679  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.095515  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.095616  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:15.096068  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:15.595565  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.595677  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.595994  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.094724  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.094815  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.095174  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.594934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.094859  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.095173  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.594829  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:17.595272  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:18.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.594793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.595065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.094767  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.595322  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll and "connect: connection refused" warning repeat at ~500ms intervals from 00:36:19 through 00:36:49; duplicate entries elided ...]
	I1208 00:36:49.094763  890932 type.go:168] "Request Body" body=""
	I1208 00:36:49.094842  890932 node_ready.go:38] duration metric: took 6m0.000209264s for node "functional-386544" to be "Ready" ...
	I1208 00:36:49.097838  890932 out.go:203] 
	W1208 00:36:49.100712  890932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:36:49.100735  890932 out.go:285] * 
	W1208 00:36:49.102896  890932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:36:49.105576  890932 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:36:56 functional-386544 containerd[5240]: time="2025-12-08T00:36:56.516755869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.589473531Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.594729164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.602007811Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.602698149Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.551574100Z" level=info msg="No images store for sha256:c14738a2f1514ed626f87e54ffcc2c6b1b148fca2fa2ca921f3182937ee7a0a8"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.554054710Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-386544\""
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.561256072Z" level=info msg="ImageCreate event name:\"sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.561898500Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.382483877Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.385056715Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.387136320Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.399049776Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.438058682Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.440899083Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.444347475Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.451112114Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.638310190Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.641281916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.648394300Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.648713106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.824579526Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.826728482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.833855512Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.834577965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:37:02.669994    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:02.670711    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:02.672372    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:02.672761    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:02.674357    9276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:37:02 up  5:19,  0 user,  load average: 0.47, 0.43, 1.06
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:36:59 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:36:59 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 08 00:36:59 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:59 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:36:59 functional-386544 kubelet[9048]: E1208 00:36:59.890737    9048 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:36:59 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:36:59 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:00 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 08 00:37:00 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:00 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:00 functional-386544 kubelet[9126]: E1208 00:37:00.661005    9126 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:00 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:00 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:01 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 08 00:37:01 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:01 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:01 functional-386544 kubelet[9173]: E1208 00:37:01.397497    9173 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:01 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:01 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 08 00:37:02 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:02 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:02 functional-386544 kubelet[9193]: E1208 00:37:02.164520    9193 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
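The kubelet section above pinpoints the root cause: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and every probe in the trace is refused. As a hedged illustration (not kubelet's or minikube's actual code), the usual way to check which cgroup hierarchy a Linux host mounts is to statfs /sys/fs/cgroup and compare the filesystem magic:

	// Hedged sketch: detect cgroup v2 vs v1 on Linux. Illustrative only,
	// not the validation code kubelet runs.
	package main

	import (
		"fmt"
		"syscall"
	)

	const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from <linux/magic.h>

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == cgroup2SuperMagic {
			fmt.Println("cgroup v2 (unified hierarchy): kubelet v1.35+ can start")
		} else {
			fmt.Println("cgroup v1: matches the validation error in the kubelet log")
		}
	}

On this host (Ubuntu 20.04 userland per the kernel section above) the check would presumably report cgroup v1, consistent with kubelet crash-looping at restart counters 824-827.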
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (383.289009ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.39s)
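For context on the 6m0s figure in the GUEST_START failure above: the wait loop polls the node's Ready condition roughly every 500ms (visible in the trimmed round_trippers trace) until its deadline expires. A minimal sketch of that shape, with a hypothetical check function standing in for the real node query (illustrative; not minikube's actual node_ready.go):

	// Minimal poll-until-deadline loop. checkReady is a hypothetical
	// stand-in for the node Ready query that keeps failing in the log.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	func waitReady(ctx context.Context, check func() error) error {
		ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the trace
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-ticker.C:
				if err := check(); err == nil {
					return nil
				}
			}
		}
	}

	func main() {
		// The real test waits 6*time.Minute; a short deadline keeps the demo quick.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		checkReady := func() error { return errors.New("connect: connection refused") }
		fmt.Println(waitReady(ctx, checkReady)) // waiting for node to be ready: context deadline exceeded
	}

With every check failing with "connection refused", the loop can only end in "context deadline exceeded", which is exactly the duration metric logged above.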

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-386544 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-386544 get pods: exit status 1 (113.342223ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-386544 get pods": exit status 1
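kubectl's "connection refused" here is a plain TCP-level failure, so it can be reproduced without kubectl at all. A minimal probe against the endpoint taken from the log (the helper itself is illustrative):

	// Illustrative TCP probe of the apiserver endpoint from the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}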
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
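Note what the inspect output actually says: the container itself is healthy ("Status": "running") and 8441/tcp is published at 127.0.0.1:33561, so only the apiserver process inside it is down. A hedged sketch that pulls those two facts out of the JSON above (field names follow the output shown; the program is illustrative and reads `docker inspect` output on stdin):

	// Extract container state and the published 8441/tcp binding from
	// `docker inspect` JSON piped to stdin.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type inspect struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
		} `json:"State"`
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		var containers []inspect
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range containers {
			fmt.Printf("state=%s running=%v\n", c.State.Status, c.State.Running)
			for _, b := range c.NetworkSettings.Ports["8441/tcp"] {
				fmt.Printf("apiserver port published at %s:%s\n", b.HostIP, b.HostPort)
			}
		}
	}

Usage (file name is hypothetical): docker inspect functional-386544 | go run checkports.go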
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (326.398502ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 logs -n 25: (1.043159855s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-932121 image ls --format short --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format yaml --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format json --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format table --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh     │ functional-932121 ssh pgrep buildkitd                                                                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image   │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                  │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls                                                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete  │ -p functional-932121                                                                                                                                    │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start   │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start   │ -p functional-386544 --alsologtostderr -v=8                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:latest                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add minikube-local-cache-test:functional-386544                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache delete minikube-local-cache-test:functional-386544                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl images                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │                     │
	│ cache   │ functional-386544 cache reload                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:37 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ kubectl │ functional-386544 kubectl -- --context functional-386544 get pods                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:30:43
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
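
	For reference, the header layout described on the line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg) can be split mechanically. A minimal sketch in Go, assuming exactly that klog-style layout (the regexp and field names below are illustrative, not part of minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
		}
	}
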
	I1208 00:30:43.106195  890932 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:30:43.106412  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106440  890932 out.go:374] Setting ErrFile to fd 2...
	I1208 00:30:43.106489  890932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:30:43.106802  890932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:30:43.107327  890932 out.go:368] Setting JSON to false
	I1208 00:30:43.108252  890932 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18796,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:30:43.108353  890932 start.go:143] virtualization:  
	I1208 00:30:43.111927  890932 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:30:43.114895  890932 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:30:43.114974  890932 notify.go:221] Checking for updates...
	I1208 00:30:43.121042  890932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:30:43.124118  890932 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:43.127146  890932 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:30:43.130017  890932 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:30:43.132953  890932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:30:43.136385  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:43.136518  890932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:30:43.171722  890932 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:30:43.171844  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.232988  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.222800102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.233101  890932 docker.go:319] overlay module found
	I1208 00:30:43.236209  890932 out.go:179] * Using the docker driver based on existing profile
	I1208 00:30:43.239024  890932 start.go:309] selected driver: docker
	I1208 00:30:43.239046  890932 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.240193  890932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:30:43.240306  890932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:30:43.299458  890932 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:30:43.288388391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:30:43.299888  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:43.299955  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:43.300012  890932 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:43.303163  890932 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:30:43.305985  890932 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:30:43.309025  890932 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:30:43.312042  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:43.312102  890932 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:30:43.312113  890932 cache.go:65] Caching tarball of preloaded images
	I1208 00:30:43.312160  890932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:30:43.312254  890932 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:30:43.312266  890932 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:30:43.312379  890932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:30:43.332475  890932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:30:43.332500  890932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:30:43.332516  890932 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:30:43.332550  890932 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:30:43.332614  890932 start.go:364] duration metric: took 40.517µs to acquireMachinesLock for "functional-386544"
	I1208 00:30:43.332637  890932 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:30:43.332643  890932 fix.go:54] fixHost starting: 
	I1208 00:30:43.332918  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:43.364362  890932 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:30:43.364391  890932 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:30:43.367522  890932 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:30:43.367561  890932 machine.go:94] provisionDockerMachine start ...
	I1208 00:30:43.367667  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.390594  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.390943  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.390953  890932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:30:43.546039  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.546064  890932 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:30:43.546132  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.563909  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.564221  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.564240  890932 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:30:43.728055  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:30:43.728136  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:43.746428  890932 main.go:143] libmachine: Using SSH client type: native
	I1208 00:30:43.746778  890932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:30:43.746805  890932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
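
	The shell guard above keeps /etc/hosts consistent with the provisioned hostname: leave the file alone if the name is already mapped, rewrite the 127.0.1.1 alias line if one exists, otherwise append a new entry. A minimal Go sketch of the same logic (ensureHosts is a hypothetical helper name, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHosts mirrors the shell snippet: no-op when the hostname is
	// already present, rewrite an existing 127.0.1.1 line if found, and
	// append a fresh entry as a last resort.
	func ensureHosts(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == name {
				return hosts // hostname already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // replace the alias line
				return strings.Join(lines, "\n")
			}
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(ensureHosts(string(data), "functional-386544"))
	}
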
	I1208 00:30:43.898980  890932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:30:43.899007  890932 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:30:43.899068  890932 ubuntu.go:190] setting up certificates
	I1208 00:30:43.899078  890932 provision.go:84] configureAuth start
	I1208 00:30:43.899155  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:43.917225  890932 provision.go:143] copyHostCerts
	I1208 00:30:43.917271  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917317  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:30:43.917335  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:30:43.917414  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:30:43.917515  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917537  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:30:43.917547  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:30:43.917575  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:30:43.917632  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917656  890932 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:30:43.917664  890932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:30:43.917691  890932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:30:43.917796  890932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:30:44.201729  890932 provision.go:177] copyRemoteCerts
	I1208 00:30:44.201799  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:30:44.201847  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.218852  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.326622  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1208 00:30:44.326687  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:30:44.345138  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1208 00:30:44.345250  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:30:44.363475  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1208 00:30:44.363575  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:30:44.382571  890932 provision.go:87] duration metric: took 483.468304ms to configureAuth
	I1208 00:30:44.382643  890932 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:30:44.382843  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:44.382857  890932 machine.go:97] duration metric: took 1.015288541s to provisionDockerMachine
	I1208 00:30:44.382865  890932 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:30:44.382880  890932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:30:44.382939  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:30:44.382987  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.401380  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.506846  890932 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:30:44.510586  890932 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1208 00:30:44.510612  890932 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1208 00:30:44.510623  890932 command_runner.go:130] > VERSION_ID="12"
	I1208 00:30:44.510628  890932 command_runner.go:130] > VERSION="12 (bookworm)"
	I1208 00:30:44.510633  890932 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1208 00:30:44.510637  890932 command_runner.go:130] > ID=debian
	I1208 00:30:44.510641  890932 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1208 00:30:44.510646  890932 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1208 00:30:44.510652  890932 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1208 00:30:44.510734  890932 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:30:44.510755  890932 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:30:44.510768  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:30:44.510833  890932 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:30:44.510921  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:30:44.510932  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /etc/ssl/certs/8467112.pem
	I1208 00:30:44.511028  890932 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:30:44.511037  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> /etc/test/nested/copy/846711/hosts
	I1208 00:30:44.511082  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:30:44.518977  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:44.538494  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:30:44.556928  890932 start.go:296] duration metric: took 174.046033ms for postStartSetup
	I1208 00:30:44.557012  890932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:30:44.557057  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.579278  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.683552  890932 command_runner.go:130] > 11%
	I1208 00:30:44.683622  890932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:30:44.688016  890932 command_runner.go:130] > 174G
	I1208 00:30:44.688056  890932 fix.go:56] duration metric: took 1.355411206s for fixHost
	I1208 00:30:44.688067  890932 start.go:83] releasing machines lock for "functional-386544", held for 1.355443108s
	I1208 00:30:44.688146  890932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:30:44.705277  890932 ssh_runner.go:195] Run: cat /version.json
	I1208 00:30:44.705345  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.705617  890932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:30:44.705687  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:44.723084  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.728238  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:44.826153  890932 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1208 00:30:44.826300  890932 ssh_runner.go:195] Run: systemctl --version
	I1208 00:30:44.917784  890932 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1208 00:30:44.920412  890932 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1208 00:30:44.920484  890932 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1208 00:30:44.920574  890932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1208 00:30:44.924900  890932 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1208 00:30:44.925095  890932 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:30:44.925215  890932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:30:44.933474  890932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:30:44.933497  890932 start.go:496] detecting cgroup driver to use...
	I1208 00:30:44.933530  890932 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:30:44.933580  890932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:30:44.950010  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:30:44.963687  890932 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:30:44.963783  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:30:44.980391  890932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:30:44.994304  890932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:30:45.255981  890932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:30:45.407305  890932 docker.go:234] disabling docker service ...
	I1208 00:30:45.407423  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:30:45.423468  890932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:30:45.437222  890932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:30:45.561603  890932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:30:45.705878  890932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:30:45.719726  890932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:30:45.733506  890932 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1208 00:30:45.735147  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:30:45.744694  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:30:45.753960  890932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:30:45.754081  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:30:45.763511  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.772723  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:30:45.781584  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:30:45.790600  890932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:30:45.799135  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:30:45.808317  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:30:45.817244  890932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 00:30:45.826211  890932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:30:45.833037  890932 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1208 00:30:45.834008  890932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:30:45.841603  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:45.965344  890932 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:30:46.100261  890932 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:30:46.100385  890932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:30:46.104210  890932 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1208 00:30:46.104295  890932 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1208 00:30:46.104358  890932 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1208 00:30:46.104385  890932 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:46.104410  890932 command_runner.go:130] > Access: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104446  890932 command_runner.go:130] > Modify: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104470  890932 command_runner.go:130] > Change: 2025-12-08 00:30:46.048263746 +0000
	I1208 00:30:46.104490  890932 command_runner.go:130] >  Birth: -
	I1208 00:30:46.104859  890932 start.go:564] Will wait 60s for crictl version
	I1208 00:30:46.104961  890932 ssh_runner.go:195] Run: which crictl
	I1208 00:30:46.108543  890932 command_runner.go:130] > /usr/local/bin/crictl
	I1208 00:30:46.108924  890932 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:30:46.136367  890932 command_runner.go:130] > Version:  0.1.0
	I1208 00:30:46.136449  890932 command_runner.go:130] > RuntimeName:  containerd
	I1208 00:30:46.136470  890932 command_runner.go:130] > RuntimeVersion:  v2.2.0
	I1208 00:30:46.136491  890932 command_runner.go:130] > RuntimeApiVersion:  v1
	I1208 00:30:46.136542  890932 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:30:46.136636  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.156742  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.159302  890932 ssh_runner.go:195] Run: containerd --version
	I1208 00:30:46.181269  890932 command_runner.go:130] > containerd containerd.io v2.2.0 1c4457e00facac03ce1d75f7b6777a7a851e5c41
	I1208 00:30:46.189080  890932 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:30:46.192076  890932 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:30:46.209081  890932 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:30:46.212923  890932 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1208 00:30:46.213097  890932 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:30:46.213209  890932 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:30:46.213289  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.236482  890932 command_runner.go:130] > {
	I1208 00:30:46.236506  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.236511  890932 command_runner.go:130] >     {
	I1208 00:30:46.236520  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.236526  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236531  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.236534  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236538  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236551  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.236558  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236563  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.236571  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236576  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236586  890932 command_runner.go:130] >     },
	I1208 00:30:46.236590  890932 command_runner.go:130] >     {
	I1208 00:30:46.236601  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.236605  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236610  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.236617  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236622  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236632  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.236641  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236646  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.236649  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236654  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236664  890932 command_runner.go:130] >     },
	I1208 00:30:46.236668  890932 command_runner.go:130] >     {
	I1208 00:30:46.236675  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.236679  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236687  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.236690  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236699  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236718  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.236722  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236728  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.236733  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.236740  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236743  890932 command_runner.go:130] >     },
	I1208 00:30:46.236746  890932 command_runner.go:130] >     {
	I1208 00:30:46.236753  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.236760  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236766  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.236769  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236773  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236781  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.236788  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236792  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.236803  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236808  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236814  890932 command_runner.go:130] >       },
	I1208 00:30:46.236821  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236825  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236828  890932 command_runner.go:130] >     },
	I1208 00:30:46.236832  890932 command_runner.go:130] >     {
	I1208 00:30:46.236841  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.236847  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236853  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.236856  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236860  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236868  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.236874  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236879  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.236885  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236894  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236901  890932 command_runner.go:130] >       },
	I1208 00:30:46.236908  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236912  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.236916  890932 command_runner.go:130] >     },
	I1208 00:30:46.236926  890932 command_runner.go:130] >     {
	I1208 00:30:46.236933  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.236937  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.236943  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.236947  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236951  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.236962  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.236968  890932 command_runner.go:130] >       ],
	I1208 00:30:46.236972  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.236976  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.236980  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.236989  890932 command_runner.go:130] >       },
	I1208 00:30:46.236993  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.236997  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237002  890932 command_runner.go:130] >     },
	I1208 00:30:46.237005  890932 command_runner.go:130] >     {
	I1208 00:30:46.237012  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.237017  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237024  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.237027  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237032  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237040  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.237047  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237055  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.237059  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237063  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237066  890932 command_runner.go:130] >     },
	I1208 00:30:46.237076  890932 command_runner.go:130] >     {
	I1208 00:30:46.237084  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.237095  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237104  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.237107  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237112  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237120  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.237126  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237131  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.237134  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237139  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.237142  890932 command_runner.go:130] >       },
	I1208 00:30:46.237146  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237153  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.237157  890932 command_runner.go:130] >     },
	I1208 00:30:46.237166  890932 command_runner.go:130] >     {
	I1208 00:30:46.237173  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.237178  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.237182  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.237189  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237193  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.237201  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.237206  890932 command_runner.go:130] >       ],
	I1208 00:30:46.237210  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.237216  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.237221  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.237227  890932 command_runner.go:130] >       },
	I1208 00:30:46.237231  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.237235  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.237238  890932 command_runner.go:130] >     }
	I1208 00:30:46.237241  890932 command_runner.go:130] >   ]
	I1208 00:30:46.237244  890932 command_runner.go:130] > }
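
	The JSON echoed line-by-line above is the complete payload of `sudo crictl images --output json`. A minimal Go sketch of structs that unmarshal it, assuming input is piped in on stdin; only the JSON keys ("images", "id", "repoTags", "size", ...) come from the log itself, the Go type names are illustrative, and note that "size" is reported as a quoted decimal string rather than a number:

	package main

	import (
		"encoding/json"
		"fmt"
		"io"
		"os"
	)

	// imageList mirrors the shape of the crictl images JSON shown above.
	// Extra keys such as the optional "uid" object are simply ignored by
	// encoding/json when no matching field is declared.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // e.g. "40636774"
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		raw, err := io.ReadAll(os.Stdin) // sudo crictl images --output json | <this program>
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			fmt.Println(img.Size, img.RepoTags)
		}
	}
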
	I1208 00:30:46.239834  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.239857  890932 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:30:46.239919  890932 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:30:46.262227  890932 command_runner.go:130] > {
	I1208 00:30:46.262250  890932 command_runner.go:130] >   "images":  [
	I1208 00:30:46.262255  890932 command_runner.go:130] >     {
	I1208 00:30:46.262265  890932 command_runner.go:130] >       "id":  "sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1208 00:30:46.262280  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262286  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1208 00:30:46.262289  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262293  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262303  890932 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"
	I1208 00:30:46.262310  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262315  890932 command_runner.go:130] >       "size":  "40636774",
	I1208 00:30:46.262319  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262323  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262326  890932 command_runner.go:130] >     },
	I1208 00:30:46.262330  890932 command_runner.go:130] >     {
	I1208 00:30:46.262348  890932 command_runner.go:130] >       "id":  "sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1208 00:30:46.262357  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262363  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1208 00:30:46.262366  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262370  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262381  890932 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1208 00:30:46.262386  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262392  890932 command_runner.go:130] >       "size":  "8034419",
	I1208 00:30:46.262396  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262400  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262403  890932 command_runner.go:130] >     },
	I1208 00:30:46.262406  890932 command_runner.go:130] >     {
	I1208 00:30:46.262413  890932 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1208 00:30:46.262427  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262439  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1208 00:30:46.262476  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262489  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262498  890932 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"
	I1208 00:30:46.262502  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262506  890932 command_runner.go:130] >       "size":  "21168808",
	I1208 00:30:46.262513  890932 command_runner.go:130] >       "username":  "nonroot",
	I1208 00:30:46.262517  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262524  890932 command_runner.go:130] >     },
	I1208 00:30:46.262531  890932 command_runner.go:130] >     {
	I1208 00:30:46.262539  890932 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1208 00:30:46.262542  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262548  890932 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1208 00:30:46.262553  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262557  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262565  890932 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"
	I1208 00:30:46.262568  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262572  890932 command_runner.go:130] >       "size":  "21136588",
	I1208 00:30:46.262579  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262583  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262588  890932 command_runner.go:130] >       },
	I1208 00:30:46.262592  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262605  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262609  890932 command_runner.go:130] >     },
	I1208 00:30:46.262612  890932 command_runner.go:130] >     {
	I1208 00:30:46.262619  890932 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1208 00:30:46.262625  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262631  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1208 00:30:46.262634  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262638  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262646  890932 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"
	I1208 00:30:46.262649  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262654  890932 command_runner.go:130] >       "size":  "24678359",
	I1208 00:30:46.262660  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262678  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262686  890932 command_runner.go:130] >       },
	I1208 00:30:46.262690  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262694  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262697  890932 command_runner.go:130] >     },
	I1208 00:30:46.262701  890932 command_runner.go:130] >     {
	I1208 00:30:46.262707  890932 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1208 00:30:46.262718  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262724  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1208 00:30:46.262727  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262731  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262739  890932 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"
	I1208 00:30:46.262745  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262749  890932 command_runner.go:130] >       "size":  "20661043",
	I1208 00:30:46.262755  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262759  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262772  890932 command_runner.go:130] >       },
	I1208 00:30:46.262776  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262780  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262783  890932 command_runner.go:130] >     },
	I1208 00:30:46.262786  890932 command_runner.go:130] >     {
	I1208 00:30:46.262793  890932 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1208 00:30:46.262800  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262805  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1208 00:30:46.262809  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262812  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262819  890932 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1208 00:30:46.262823  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262827  890932 command_runner.go:130] >       "size":  "22429671",
	I1208 00:30:46.262834  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262838  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262844  890932 command_runner.go:130] >     },
	I1208 00:30:46.262848  890932 command_runner.go:130] >     {
	I1208 00:30:46.262857  890932 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1208 00:30:46.262867  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262876  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1208 00:30:46.262882  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262886  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262893  890932 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"
	I1208 00:30:46.262907  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262915  890932 command_runner.go:130] >       "size":  "15391364",
	I1208 00:30:46.262919  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.262922  890932 command_runner.go:130] >         "value":  "0"
	I1208 00:30:46.262929  890932 command_runner.go:130] >       },
	I1208 00:30:46.262933  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.262943  890932 command_runner.go:130] >       "pinned":  false
	I1208 00:30:46.262947  890932 command_runner.go:130] >     },
	I1208 00:30:46.262950  890932 command_runner.go:130] >     {
	I1208 00:30:46.262957  890932 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1208 00:30:46.262963  890932 command_runner.go:130] >       "repoTags":  [
	I1208 00:30:46.262968  890932 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1208 00:30:46.262971  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262975  890932 command_runner.go:130] >       "repoDigests":  [
	I1208 00:30:46.262982  890932 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"
	I1208 00:30:46.262985  890932 command_runner.go:130] >       ],
	I1208 00:30:46.262990  890932 command_runner.go:130] >       "size":  "267939",
	I1208 00:30:46.262996  890932 command_runner.go:130] >       "uid":  {
	I1208 00:30:46.263000  890932 command_runner.go:130] >         "value":  "65535"
	I1208 00:30:46.263013  890932 command_runner.go:130] >       },
	I1208 00:30:46.263017  890932 command_runner.go:130] >       "username":  "",
	I1208 00:30:46.263021  890932 command_runner.go:130] >       "pinned":  true
	I1208 00:30:46.263024  890932 command_runner.go:130] >     }
	I1208 00:30:46.263027  890932 command_runner.go:130] >   ]
	I1208 00:30:46.263031  890932 command_runner.go:130] > }
	I1208 00:30:46.265493  890932 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:30:46.265517  890932 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:30:46.265524  890932 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:30:46.265625  890932 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
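
	The unit file above is written as a systemd drop-in (the 328-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The empty ExecStart= line is deliberate: systemd requires clearing the inherited ExecStart before a drop-in for a non-oneshot service may redefine it. A sketch of rendering such a drop-in with text/template, using values from this log (the template shape is illustrative, not minikube's exact template):

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn clears the inherited ExecStart, then redefines it.
	const dropIn = `[Unit]
	Wants=containerd.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"Version": "v1.35.0-beta.0",
			"Node":    "functional-386544",
			"IP":      "192.168.49.2",
		})
	}
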
	I1208 00:30:46.265699  890932 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:30:46.291229  890932 command_runner.go:130] > {
	I1208 00:30:46.291250  890932 command_runner.go:130] >   "cniconfig": {
	I1208 00:30:46.291256  890932 command_runner.go:130] >     "Networks": [
	I1208 00:30:46.291260  890932 command_runner.go:130] >       {
	I1208 00:30:46.291266  890932 command_runner.go:130] >         "Config": {
	I1208 00:30:46.291271  890932 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1208 00:30:46.291283  890932 command_runner.go:130] >           "Name": "cni-loopback",
	I1208 00:30:46.291288  890932 command_runner.go:130] >           "Plugins": [
	I1208 00:30:46.291292  890932 command_runner.go:130] >             {
	I1208 00:30:46.291297  890932 command_runner.go:130] >               "Network": {
	I1208 00:30:46.291301  890932 command_runner.go:130] >                 "ipam": {},
	I1208 00:30:46.291307  890932 command_runner.go:130] >                 "type": "loopback"
	I1208 00:30:46.291311  890932 command_runner.go:130] >               },
	I1208 00:30:46.291322  890932 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1208 00:30:46.291326  890932 command_runner.go:130] >             }
	I1208 00:30:46.291334  890932 command_runner.go:130] >           ],
	I1208 00:30:46.291344  890932 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1208 00:30:46.291348  890932 command_runner.go:130] >         },
	I1208 00:30:46.291356  890932 command_runner.go:130] >         "IFName": "lo"
	I1208 00:30:46.291362  890932 command_runner.go:130] >       }
	I1208 00:30:46.291366  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291371  890932 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1208 00:30:46.291375  890932 command_runner.go:130] >     "PluginDirs": [
	I1208 00:30:46.291379  890932 command_runner.go:130] >       "/opt/cni/bin"
	I1208 00:30:46.291390  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291395  890932 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1208 00:30:46.291398  890932 command_runner.go:130] >     "Prefix": "eth"
	I1208 00:30:46.291402  890932 command_runner.go:130] >   },
	I1208 00:30:46.291411  890932 command_runner.go:130] >   "config": {
	I1208 00:30:46.291415  890932 command_runner.go:130] >     "cdiSpecDirs": [
	I1208 00:30:46.291419  890932 command_runner.go:130] >       "/etc/cdi",
	I1208 00:30:46.291427  890932 command_runner.go:130] >       "/var/run/cdi"
	I1208 00:30:46.291432  890932 command_runner.go:130] >     ],
	I1208 00:30:46.291436  890932 command_runner.go:130] >     "cni": {
	I1208 00:30:46.291448  890932 command_runner.go:130] >       "binDir": "",
	I1208 00:30:46.291453  890932 command_runner.go:130] >       "binDirs": [
	I1208 00:30:46.291457  890932 command_runner.go:130] >         "/opt/cni/bin"
	I1208 00:30:46.291460  890932 command_runner.go:130] >       ],
	I1208 00:30:46.291464  890932 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1208 00:30:46.291468  890932 command_runner.go:130] >       "confTemplate": "",
	I1208 00:30:46.291472  890932 command_runner.go:130] >       "ipPref": "",
	I1208 00:30:46.291475  890932 command_runner.go:130] >       "maxConfNum": 1,
	I1208 00:30:46.291479  890932 command_runner.go:130] >       "setupSerially": false,
	I1208 00:30:46.291483  890932 command_runner.go:130] >       "useInternalLoopback": false
	I1208 00:30:46.291487  890932 command_runner.go:130] >     },
	I1208 00:30:46.291492  890932 command_runner.go:130] >     "containerd": {
	I1208 00:30:46.291499  890932 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1208 00:30:46.291504  890932 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1208 00:30:46.291509  890932 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1208 00:30:46.291515  890932 command_runner.go:130] >       "runtimes": {
	I1208 00:30:46.291519  890932 command_runner.go:130] >         "runc": {
	I1208 00:30:46.291527  890932 command_runner.go:130] >           "ContainerAnnotations": null,
	I1208 00:30:46.291533  890932 command_runner.go:130] >           "PodAnnotations": null,
	I1208 00:30:46.291545  890932 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1208 00:30:46.291550  890932 command_runner.go:130] >           "cgroupWritable": false,
	I1208 00:30:46.291554  890932 command_runner.go:130] >           "cniConfDir": "",
	I1208 00:30:46.291558  890932 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1208 00:30:46.291564  890932 command_runner.go:130] >           "io_type": "",
	I1208 00:30:46.291568  890932 command_runner.go:130] >           "options": {
	I1208 00:30:46.291576  890932 command_runner.go:130] >             "BinaryName": "",
	I1208 00:30:46.291580  890932 command_runner.go:130] >             "CriuImagePath": "",
	I1208 00:30:46.291588  890932 command_runner.go:130] >             "CriuWorkPath": "",
	I1208 00:30:46.291593  890932 command_runner.go:130] >             "IoGid": 0,
	I1208 00:30:46.291599  890932 command_runner.go:130] >             "IoUid": 0,
	I1208 00:30:46.291604  890932 command_runner.go:130] >             "NoNewKeyring": false,
	I1208 00:30:46.291615  890932 command_runner.go:130] >             "Root": "",
	I1208 00:30:46.291619  890932 command_runner.go:130] >             "ShimCgroup": "",
	I1208 00:30:46.291624  890932 command_runner.go:130] >             "SystemdCgroup": false
	I1208 00:30:46.291627  890932 command_runner.go:130] >           },
	I1208 00:30:46.291641  890932 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1208 00:30:46.291648  890932 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1208 00:30:46.291655  890932 command_runner.go:130] >           "runtimePath": "",
	I1208 00:30:46.291660  890932 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1208 00:30:46.291664  890932 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1208 00:30:46.291668  890932 command_runner.go:130] >           "snapshotter": ""
	I1208 00:30:46.291672  890932 command_runner.go:130] >         }
	I1208 00:30:46.291675  890932 command_runner.go:130] >       }
	I1208 00:30:46.291678  890932 command_runner.go:130] >     },
	I1208 00:30:46.291689  890932 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1208 00:30:46.291698  890932 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1208 00:30:46.291705  890932 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1208 00:30:46.291709  890932 command_runner.go:130] >     "disableApparmor": false,
	I1208 00:30:46.291714  890932 command_runner.go:130] >     "disableHugetlbController": true,
	I1208 00:30:46.291721  890932 command_runner.go:130] >     "disableProcMount": false,
	I1208 00:30:46.291726  890932 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1208 00:30:46.291730  890932 command_runner.go:130] >     "enableCDI": true,
	I1208 00:30:46.291740  890932 command_runner.go:130] >     "enableSelinux": false,
	I1208 00:30:46.291745  890932 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1208 00:30:46.291749  890932 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1208 00:30:46.291753  890932 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1208 00:30:46.291758  890932 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1208 00:30:46.291763  890932 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1208 00:30:46.291770  890932 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1208 00:30:46.291775  890932 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1208 00:30:46.291789  890932 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291798  890932 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1208 00:30:46.291803  890932 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1208 00:30:46.291810  890932 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1208 00:30:46.291819  890932 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1208 00:30:46.291823  890932 command_runner.go:130] >   },
	I1208 00:30:46.291827  890932 command_runner.go:130] >   "features": {
	I1208 00:30:46.291831  890932 command_runner.go:130] >     "supplemental_groups_policy": true
	I1208 00:30:46.291835  890932 command_runner.go:130] >   },
	I1208 00:30:46.291839  890932 command_runner.go:130] >   "golang": "go1.24.9",
	I1208 00:30:46.291850  890932 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291862  890932 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1208 00:30:46.291866  890932 command_runner.go:130] >   "runtimeHandlers": [
	I1208 00:30:46.291870  890932 command_runner.go:130] >     {
	I1208 00:30:46.291874  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291886  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291890  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291893  890932 command_runner.go:130] >       }
	I1208 00:30:46.291897  890932 command_runner.go:130] >     },
	I1208 00:30:46.291907  890932 command_runner.go:130] >     {
	I1208 00:30:46.291911  890932 command_runner.go:130] >       "features": {
	I1208 00:30:46.291916  890932 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1208 00:30:46.291919  890932 command_runner.go:130] >         "user_namespaces": true
	I1208 00:30:46.291922  890932 command_runner.go:130] >       },
	I1208 00:30:46.291926  890932 command_runner.go:130] >       "name": "runc"
	I1208 00:30:46.291930  890932 command_runner.go:130] >     }
	I1208 00:30:46.291939  890932 command_runner.go:130] >   ],
	I1208 00:30:46.291952  890932 command_runner.go:130] >   "status": {
	I1208 00:30:46.291955  890932 command_runner.go:130] >     "conditions": [
	I1208 00:30:46.291959  890932 command_runner.go:130] >       {
	I1208 00:30:46.291962  890932 command_runner.go:130] >         "message": "",
	I1208 00:30:46.291966  890932 command_runner.go:130] >         "reason": "",
	I1208 00:30:46.291973  890932 command_runner.go:130] >         "status": true,
	I1208 00:30:46.291983  890932 command_runner.go:130] >         "type": "RuntimeReady"
	I1208 00:30:46.291990  890932 command_runner.go:130] >       },
	I1208 00:30:46.291993  890932 command_runner.go:130] >       {
	I1208 00:30:46.292000  890932 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1208 00:30:46.292004  890932 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1208 00:30:46.292009  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292013  890932 command_runner.go:130] >         "type": "NetworkReady"
	I1208 00:30:46.292019  890932 command_runner.go:130] >       },
	I1208 00:30:46.292022  890932 command_runner.go:130] >       {
	I1208 00:30:46.292047  890932 command_runner.go:130] >         "message": "{\"io.containerd.deprecation/cgroup-v1\":\"The support for cgroup v1 is deprecated since containerd v2.2 and will be removed by no later than May 2029. Upgrade the host to use cgroup v2.\"}",
	I1208 00:30:46.292057  890932 command_runner.go:130] >         "reason": "ContainerdHasDeprecationWarnings",
	I1208 00:30:46.292063  890932 command_runner.go:130] >         "status": false,
	I1208 00:30:46.292068  890932 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1208 00:30:46.292074  890932 command_runner.go:130] >       }
	I1208 00:30:46.292077  890932 command_runner.go:130] >     ]
	I1208 00:30:46.292080  890932 command_runner.go:130] >   }
	I1208 00:30:46.292083  890932 command_runner.go:130] > }
	I1208 00:30:46.295037  890932 cni.go:84] Creating CNI manager for ""
	I1208 00:30:46.295064  890932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:30:46.295108  890932 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
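
	The crictl info output above reports NetworkReady=false ("cni plugin not initialized"), which is expected at this stage: no CNI has been applied yet. The three lines above record the decision that follows: with the docker driver and the containerd runtime, kindnet is recommended, using the 10.244.0.0/16 pod CIDR. A simplified sketch of that decision, reduced to the one case visible in this log rather than the full decision table:

	package main

	import "fmt"

	// chooseCNI returns the CNI minikube would recommend. Only the
	// docker-driver branch from this log is modeled here.
	func chooseCNI(driver, runtime, requested string) string {
		if requested != "" {
			return requested // honor an explicit --cni flag
		}
		if driver == "docker" && runtime != "docker" {
			return "kindnet"
		}
		return "bridge"
	}

	func main() {
		fmt.Println(chooseCNI("docker", "containerd", "")) // kindnet
	}
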
	I1208 00:30:46.295135  890932 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:30:46.295307  890932 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
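
	The generated config above is a single multi-document YAML file (the 2237-byte kubeadm.yaml.new scp'd below) containing four documents: InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration. Note that evictionHard is set to 0% across the board, which, as the embedded comment says, disables kubelet disk-pressure eviction. A small sketch that splits such a file into its documents and reports each kind (the stand-in config is shortened to just the headers):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated stand-in for the kubeadm.yaml generated above.
		config := `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	`
		// kubeadm recognizes the same "---" separator between documents.
		for _, doc := range strings.Split(config, "---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if kind, ok := strings.CutPrefix(strings.TrimSpace(line), "kind: "); ok {
					fmt.Println(kind)
				}
			}
		}
	}
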
	
	I1208 00:30:46.295389  890932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:30:46.302776  890932 command_runner.go:130] > kubeadm
	I1208 00:30:46.302853  890932 command_runner.go:130] > kubectl
	I1208 00:30:46.302863  890932 command_runner.go:130] > kubelet
	I1208 00:30:46.303600  890932 binaries.go:51] Found k8s binaries, skipping transfer
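
	The `sudo ls` above is a cheap presence probe: if all three binaries already sit in the versioned directory, the transfer/download step is skipped. An equivalent check in Go, with paths taken from this log:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
		for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
				fmt.Println("missing, transfer needed:", bin)
				return
			}
		}
		fmt.Println("Found k8s binaries, skipping transfer")
	}
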
	I1208 00:30:46.303710  890932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:30:46.311760  890932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:30:46.325760  890932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:30:46.340134  890932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 00:30:46.359100  890932 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:30:46.362934  890932 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
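
	The grep above verifies that control-plane.minikube.internal already resolves to the node IP inside the guest; the entry would only be appended to /etc/hosts if the probe came back empty. A sketch of that idempotent check-then-append (must run as root; the entry string is copied from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(data), entry) {
			fmt.Println("hosts entry already present")
			return
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fmt.Fprintln(f, entry)
	}
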
	I1208 00:30:46.363653  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:46.491856  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:47.343005  890932 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:30:47.343028  890932 certs.go:195] generating shared ca certs ...
	I1208 00:30:47.343054  890932 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:47.343240  890932 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:30:47.343312  890932 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:30:47.343326  890932 certs.go:257] generating profile certs ...
	I1208 00:30:47.343460  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:30:47.343536  890932 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:30:47.343590  890932 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:30:47.343612  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1208 00:30:47.343630  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1208 00:30:47.343655  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1208 00:30:47.343671  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1208 00:30:47.343691  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1208 00:30:47.343706  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1208 00:30:47.343719  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1208 00:30:47.343734  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1208 00:30:47.343800  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:30:47.343845  890932 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:30:47.343860  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:30:47.343888  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:30:47.343924  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:30:47.343960  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:30:47.344029  890932 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:30:47.344078  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.344096  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem -> /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.344112  890932 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.344800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:30:47.365934  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:30:47.392004  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:30:47.412283  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:30:47.434592  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:30:47.452176  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:30:47.471245  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:30:47.489925  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:30:47.511686  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:30:47.530800  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:30:47.549900  890932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:30:47.568360  890932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:30:47.581856  890932 ssh_runner.go:195] Run: openssl version
	I1208 00:30:47.588310  890932 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1208 00:30:47.588394  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.596457  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:30:47.604012  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607834  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607889  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.607941  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:30:47.648743  890932 command_runner.go:130] > 3ec20f2e
	I1208 00:30:47.649210  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:30:47.656730  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.664307  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:30:47.671943  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.675995  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676036  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.676087  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:30:47.716996  890932 command_runner.go:130] > b5213941
	I1208 00:30:47.717090  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:30:47.724719  890932 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.732215  890932 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:30:47.740036  890932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744030  890932 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744106  890932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.744186  890932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:30:47.784659  890932 command_runner.go:130] > 51391683
	I1208 00:30:47.785207  890932 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
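
	The three rounds above install CA certificates the way OpenSSL expects: each PEM is symlinked into /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout` (e.g. b5213941 for minikubeCA.pem), and then checked as /etc/ssl/certs/<hash>.0 — the filename scheme OpenSSL uses to look up a CA by subject hash. A sketch of one round (requires root and openssl on PATH; not minikube's exact code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert computes the OpenSSL subject hash for a PEM certificate and
	// symlinks /etc/ssl/certs/<hash>.0 to it, mirroring the log above.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // emulate `ln -fs`: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
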
	I1208 00:30:47.792679  890932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796767  890932 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:30:47.796815  890932 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1208 00:30:47.796824  890932 command_runner.go:130] > Device: 259,1	Inode: 3390890     Links: 1
	I1208 00:30:47.796831  890932 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1208 00:30:47.796838  890932 command_runner.go:130] > Access: 2025-12-08 00:26:39.668848968 +0000
	I1208 00:30:47.796844  890932 command_runner.go:130] > Modify: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796849  890932 command_runner.go:130] > Change: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796854  890932 command_runner.go:130] >  Birth: 2025-12-08 00:22:35.857965266 +0000
	I1208 00:30:47.796956  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:30:47.837955  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.838424  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:30:47.879403  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.879847  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:30:47.921180  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.921679  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:30:47.962513  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:47.963017  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:30:48.007633  890932 command_runner.go:130] > Certificate will not expire
	I1208 00:30:48.007748  890932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:30:48.052514  890932 command_runner.go:130] > Certificate will not expire
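
	Each probe above runs `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is openssl's success message, and an expiring cert would trigger regeneration. A pure-Go equivalent of the same window check (illustrative, not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports whether the certificate at path is still valid
	// `window` from now, mirroring `openssl x509 -checkend`.
	func checkEnd(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := checkEnd("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		if ok {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire")
		}
	}
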
	I1208 00:30:48.052941  890932 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:30:48.053033  890932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:30:48.053097  890932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:30:48.081438  890932 cri.go:89] found id: ""
	I1208 00:30:48.081565  890932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:30:48.089271  890932 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1208 00:30:48.089305  890932 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1208 00:30:48.089313  890932 command_runner.go:130] > /var/lib/minikube/etcd:
	I1208 00:30:48.093391  890932 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:30:48.093432  890932 kubeadm.go:598] restartPrimaryControlPlane start ...
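
	The `ls` probe above found all three markers (the kubelet flags file, the kubelet config, and the etcd data directory), so minikube takes the restart path instead of a fresh `kubeadm init`. The same decision, sketched:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Marker paths copied from the log; all must exist for a restart.
		markers := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		for _, m := range markers {
			if _, err := os.Stat(m); err != nil {
				fmt.Println("fresh init: missing", m)
				return
			}
		}
		fmt.Println("found existing configuration files, will attempt cluster restart")
	}
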
	I1208 00:30:48.093495  890932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:30:48.102864  890932 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:30:48.103337  890932 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.103450  890932 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "functional-386544" cluster setting kubeconfig missing "functional-386544" context setting]
	I1208 00:30:48.103819  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.104260  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.104413  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.105009  890932 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 00:30:48.105030  890932 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 00:30:48.105036  890932 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 00:30:48.105041  890932 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 00:30:48.105047  890932 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 00:30:48.105105  890932 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1208 00:30:48.105315  890932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:30:48.117774  890932 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1208 00:30:48.117857  890932 kubeadm.go:602] duration metric: took 24.417752ms to restartPrimaryControlPlane
	I1208 00:30:48.117881  890932 kubeadm.go:403] duration metric: took 64.945899ms to StartCluster
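
	The `diff -u` above compares the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; a zero exit status (no differences) is what lets minikube conclude that no reconfiguration is required and restart the control plane as-is. A sketch of the probe:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// needsReconfig returns true when the live config differs from the newly
	// rendered one. diff exits 1 on differences, so any error here is
	// treated as "reconfigure" for simplicity.
	func needsReconfig() bool {
		err := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new").Run()
		return err != nil
	}

	func main() {
		fmt.Println("reconfiguration required:", needsReconfig())
	}
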
	I1208 00:30:48.117925  890932 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.118025  890932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.118797  890932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:30:48.119107  890932 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 00:30:48.119487  890932 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 00:30:48.119575  890932 addons.go:70] Setting storage-provisioner=true in profile "functional-386544"
	I1208 00:30:48.119600  890932 addons.go:239] Setting addon storage-provisioner=true in "functional-386544"
	I1208 00:30:48.119601  890932 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:30:48.119630  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.120591  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.119636  890932 addons.go:70] Setting default-storageclass=true in profile "functional-386544"
	I1208 00:30:48.120910  890932 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-386544"
	I1208 00:30:48.121235  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.122185  890932 out.go:179] * Verifying Kubernetes components...
	I1208 00:30:48.124860  890932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:30:48.159125  890932 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:30:48.159302  890932 kapi.go:59] client config for functional-386544: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 00:30:48.159592  890932 addons.go:239] Setting addon default-storageclass=true in "functional-386544"
	I1208 00:30:48.159620  890932 host.go:66] Checking if "functional-386544" exists ...
	I1208 00:30:48.160038  890932 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:30:48.170516  890932 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 00:30:48.173762  890932 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.173784  890932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 00:30:48.173857  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.210938  890932 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:48.210964  890932 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 00:30:48.211031  890932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:30:48.228251  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.254642  890932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:30:48.338576  890932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:30:48.365732  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:48.388846  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.094190  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094240  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094289  890932 retry.go:31] will retry after 221.572731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.094347  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094353  890932 retry.go:31] will retry after 127.29639ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.094558  890932 node_ready.go:35] waiting up to 6m0s for node "functional-386544" to be "Ready" ...
	I1208 00:30:49.094733  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.094831  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.095237  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
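
	The GET above is the start of a polling loop: minikube asks the apiserver for the node object and keeps retrying for up to 6 minutes until the node reports Ready (the empty status and 0 ms here mean the apiserver is not answering yet). A stripped-down version of such a wait loop; the real client authenticates with the profile's client certificates, so InsecureSkipVerify below is only to keep the sketch self-contained:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-386544"
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				// Reached the apiserver; a real client would now decode the
				// node object and inspect its Ready condition.
				fmt.Println("apiserver answered:", resp.Status)
				resp.Body.Close()
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node functional-386544")
	}
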
	I1208 00:30:49.222592  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.293397  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.293520  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.293548  890932 retry.go:31] will retry after 191.192714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.316617  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.385398  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.389149  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.389192  890932 retry.go:31] will retry after 221.019406ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.485459  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:49.544915  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.548575  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.548650  890932 retry.go:31] will retry after 430.912171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.594843  890932 type.go:168] "Request Body" body=""
	I1208 00:30:49.594928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:49.595415  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:49.610614  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:49.669839  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:49.669884  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.669904  890932 retry.go:31] will retry after 602.088887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:49.980400  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:50.054076  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.057921  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.057957  890932 retry.go:31] will retry after 1.251170732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.095196  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.095305  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.095601  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
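The paired "Request"/"Response" lines come from a logging wrapper around the client's HTTP transport; status="" with milliseconds=0 indicates the TCP dial was refused before any HTTP exchange happened, so there is no status code to report. A sketch of that wrapping idea as a custom http.RoundTripper (an illustration of the technique only; loggingTransport is a hypothetical name, not Kubernetes' round_trippers implementation):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // loggingTransport wraps an http.RoundTripper and prints one line per
    // request and per response, similar in spirit to the output above.
    type loggingTransport struct {
    	next http.RoundTripper
    }

    func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("Request verb=%s url=%s\n", req.Method, req.URL)
    	start := time.Now()
    	resp, err := t.next.RoundTrip(req)
    	ms := time.Since(start).Milliseconds()
    	if err != nil {
    		// A refused connect fails before any HTTP status exists, which
    		// is why the log shows status="" and milliseconds=0.
    		fmt.Printf("Response status=\"\" milliseconds=%d err=%v\n", ms, err)
    		return nil, err
    	}
    	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: &loggingTransport{next: http.DefaultTransport}}
    	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-386544")
    	fmt.Println(err) // expected: connection refused while the apiserver is down
    }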
	I1208 00:30:50.273088  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:50.333799  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:50.333898  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.333941  890932 retry.go:31] will retry after 841.525831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:50.595581  890932 type.go:168] "Request Body" body=""
	I1208 00:30:50.595651  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:50.595949  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:51.095803  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.096238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:51.096319  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
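In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-386544 roughly every 500ms for up to the 6m0s budget declared at node_ready.go:35, surfacing a warning periodically while the dial keeps being refused. The loop has the same shape as this client-go sketch (a minimal sketch only; waitNodeReady is a hypothetical name, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named node until its Ready condition is True
    // or the timeout expires, tolerating transient apiserver outages.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				// Connection refused while the apiserver restarts: log and keep polling.
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitNodeReady(cs, "functional-386544", 6*time.Minute))
    }

Returning (false, nil) on a Get error is the key design choice: a refused connection is treated as "not ready yet" rather than as a terminal failure, so only the timeout ends the wait.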
	I1208 00:30:51.176619  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:51.234663  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.238362  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.238405  890932 retry.go:31] will retry after 1.674228806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.309626  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:51.370041  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:51.373759  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.373793  890932 retry.go:31] will retry after 1.825797421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:51.595251  890932 type.go:168] "Request Body" body=""
	I1208 00:30:51.595336  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:51.595859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.095576  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.095656  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.096001  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:30:52.594894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:52.595585  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:52.912970  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:52.971340  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:52.975027  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:52.975063  890932 retry.go:31] will retry after 2.158822419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.095343  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.095426  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.095834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:53.200381  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:53.262558  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:53.262597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.262618  890932 retry.go:31] will retry after 2.117348765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:53.595941  890932 type.go:168] "Request Body" body=""
	I1208 00:30:53.596038  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:53.596315  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:53.596377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:54.094883  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.095321  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:54.595354  890932 type.go:168] "Request Body" body=""
	I1208 00:30:54.595475  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:54.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.097427  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.097684  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.097999  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:55.134417  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:55.207147  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.207186  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.207211  890932 retry.go:31] will retry after 1.888454669s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.380583  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:55.442228  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:55.442305  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.442354  890932 retry.go:31] will retry after 2.144073799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:55.595860  890932 type.go:168] "Request Body" body=""
	I1208 00:30:55.595937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:55.596276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:56.094950  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.095041  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.095472  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:56.095552  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:56.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:30:56.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:56.595189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.094913  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.094995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.096590  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:57.159346  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.159395  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.159419  890932 retry.go:31] will retry after 2.451052222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.586888  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:30:57.595329  890932 type.go:168] "Request Body" body=""
	I1208 00:30:57.595647  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:57.595917  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:57.644195  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:57.648428  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:57.648466  890932 retry.go:31] will retry after 6.27239315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:58.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.095132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:58.595202  890932 type.go:168] "Request Body" body=""
	I1208 00:30:58.595277  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:58.595673  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:30:58.595737  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:30:59.095382  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.095474  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.095817  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.595497  890932 type.go:168] "Request Body" body=""
	I1208 00:30:59.595641  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:30:59.595962  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:30:59.611138  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:30:59.678142  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:30:59.678192  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:30:59.678217  890932 retry.go:31] will retry after 3.668002843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:00.095797  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.095883  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.096216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:00.594886  890932 type.go:168] "Request Body" body=""
	I1208 00:31:00.594963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:00.595392  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:01.095660  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.095757  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.096070  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:01.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:01.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:31:01.594889  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:01.595445  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.094968  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.095282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:02.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:02.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:02.595407  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:03.346685  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:03.431951  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.432026  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.432051  890932 retry.go:31] will retry after 7.871453146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.595808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:03.595982  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:03.596320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:03.596392  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:03.921995  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:03.979614  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:03.984229  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:03.984264  890932 retry.go:31] will retry after 6.338984785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:04.095500  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.095579  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.095881  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:04.595749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:04.595874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:04.596230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.094893  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.094969  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.095272  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:05.594775  890932 type.go:168] "Request Body" body=""
	I1208 00:31:05.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:05.595280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:06.094874  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.094960  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:06.095343  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:06.594851  890932 type.go:168] "Request Body" body=""
	I1208 00:31:06.594931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:06.596121  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:07.095769  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.095852  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.096129  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:07.594868  890932 type.go:168] "Request Body" body=""
	I1208 00:31:07.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:07.595312  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:08.594744  890932 type.go:168] "Request Body" body=""
	I1208 00:31:08.594830  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:08.595101  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:08.595154  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:09.094875  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.094970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.095284  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:09.594884  890932 type.go:168] "Request Body" body=""
	I1208 00:31:09.594974  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:09.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.095326  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.095417  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.095739  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:10.324305  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:10.384998  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:10.385051  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.385071  890932 retry.go:31] will retry after 7.782157506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:10.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:31:10.595548  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:10.595897  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:10.595950  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:11.095753  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.095835  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:11.304608  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:11.367180  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:11.367234  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.367256  890932 retry.go:31] will retry after 13.123466664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:11.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:31:11.595455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:11.595807  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.095614  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.095694  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.095989  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:12.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:31:12.594814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:12.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:13.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.094906  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:13.095366  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
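
The node_ready.go warnings punctuate the polling: minikube is fetching the Node object to read its "Ready" condition, and the GET itself fails with connection refused because nothing is listening on 8441 yet. A minimal client-go sketch of that readiness check (simplified; not minikube's code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the Node and inspects its "Ready" condition -- the
    // check behind the node_ready.go lines above.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            // Here this is "dial tcp 192.168.49.2:8441: connect: connection refused".
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(nodeReady(context.Background(), cs, "functional-386544"))
    }
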
	I1208 00:31:13.595620  890932 type.go:168] "Request Body" body=""
	I1208 00:31:13.595700  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:13.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.094811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.094918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.095230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:14.594815  890932 type.go:168] "Request Body" body=""
	I1208 00:31:14.594881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:14.595183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.094865  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.094943  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.095289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:15.594876  890932 type.go:168] "Request Body" body=""
	I1208 00:31:15.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:15.595270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:15.595318  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:16.094820  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.094894  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.095164  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:16.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:31:16.594908  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:16.595244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.095054  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.095138  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.095471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:17.595816  890932 type.go:168] "Request Body" body=""
	I1208 00:31:17.595940  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:17.596241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:17.596293  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:18.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.094955  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.095310  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:18.168028  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:18.232113  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:18.232150  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:18.232169  890932 retry.go:31] will retry after 8.094581729s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
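
The validation error above is worth decoding: kubectl apply performs client-side validation by first downloading the cluster's OpenAPI schema from /openapi/v2, so when the apiserver is unreachable the command fails at that fetch, before any manifest is sent. --validate=false would skip the schema fetch, but the apply itself would still need a live apiserver. A standalone probe of the same endpoint (a sketch; TLS verification is skipped because the apiserver cert is self-signed, and the failure here happens one layer below TLS anyway):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        // The same GET kubectl's validator issues before applying a manifest.
        resp, err := c.Get("https://localhost:8441/openapi/v2?timeout=32s")
        if err != nil {
            fmt.Println("openapi fetch failed:", err) // "connect: connection refused" here
            return
        }
        defer resp.Body.Close()
        fmt.Println("openapi status:", resp.Status)
    }
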
	I1208 00:31:18.595690  890932 type.go:168] "Request Body" body=""
	I1208 00:31:18.595775  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:18.596183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.095628  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.095697  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.096011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:19.594718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:19.594802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:19.595181  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:20.094784  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.095232  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:20.095311  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:20.595598  890932 type.go:168] "Request Body" body=""
	I1208 00:31:20.595793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:20.596357  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.095040  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:21.594912  890932 type.go:168] "Request Body" body=""
	I1208 00:31:21.595011  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:21.595362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.094750  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.094826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.095087  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:22.594811  890932 type.go:168] "Request Body" body=""
	I1208 00:31:22.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:22.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:22.595315  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:23.094999  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.095088  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.095463  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:23.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:23.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:23.595136  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.094856  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:24.490869  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:24.557459  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:24.557507  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.557527  890932 retry.go:31] will retry after 14.933128441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:24.595841  890932 type.go:168] "Request Body" body=""
	I1208 00:31:24.595922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:24.596313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:24.596367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
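
The steady ~500ms spacing of the GET lines is the signature of a poll-until-timeout loop. A sketch of the same cadence using apimachinery's wait helpers (assumes a recent k8s.io/apimachinery; this is an illustration, not minikube's actual loop):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls roughly every 500ms, matching the cadence of the
    // GET lines in this log, until the node reports Ready or the timeout hits.
    func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient apiserver error: log and retry, not fatal
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

Returning (false, nil) on a transient error is what keeps the loop alive through the connection-refused stretch; returning the error instead would abort the wait on the first failed poll.
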
	I1208 00:31:25.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.094843  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.095113  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:25.594817  890932 type.go:168] "Request Body" body=""
	I1208 00:31:25.594915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:25.595217  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.094904  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.095360  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:26.327725  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:26.388171  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:26.388210  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.388230  890932 retry.go:31] will retry after 17.607962094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:26.595498  890932 type.go:168] "Request Body" body=""
	I1208 00:31:26.595632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:26.595892  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:27.095752  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.095851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.096189  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:27.096258  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:27.594738  890932 type.go:168] "Request Body" body=""
	I1208 00:31:27.594829  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:27.595158  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.095672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.096073  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:28.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:31:28.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:28.595257  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.094931  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:29.595836  890932 type.go:168] "Request Body" body=""
	I1208 00:31:29.595984  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:29.596331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:29.596385  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:30.095156  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.095252  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.095627  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:30.595556  890932 type.go:168] "Request Body" body=""
	I1208 00:31:30.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:30.596442  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.094732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.094808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.095102  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:31.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:31:31.594886  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:31.595210  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:32.094828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.095216  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:32.095266  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:32.595760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:32.595841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:32.596354  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.094945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.095264  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:33.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:31:33.594878  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:33.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:34.094814  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.095244  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:34.095287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:34.594940  890932 type.go:168] "Request Body" body=""
	I1208 00:31:34.595021  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:34.595365  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.094942  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.095029  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.095358  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:35.594795  890932 type.go:168] "Request Body" body=""
	I1208 00:31:35.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:35.595132  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:36.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.094904  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.095255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:36.095316  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:36.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:36.594945  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:36.595276  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.095623  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.095696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.095973  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:37.594749  890932 type.go:168] "Request Body" body=""
	I1208 00:31:37.594850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:37.595227  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:38.094987  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.095112  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.095495  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:38.095555  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:38.595474  890932 type.go:168] "Request Body" body=""
	I1208 00:31:38.595556  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:38.595831  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.095726  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.095806  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.096148  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:39.491741  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:39.568327  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:39.568372  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.568394  890932 retry.go:31] will retry after 16.95217324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:39.595718  890932 type.go:168] "Request Body" body=""
	I1208 00:31:39.596632  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:39.597031  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:40.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.095785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:40.096128  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
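
"connect: connection refused" is itself diagnostic: the node's IP is reachable and the kernel actively rejected the connection, so nothing is listening on 8441 (the apiserver is down or still restarting). A network or firewall problem would instead surface as a timeout. A plain TCP probe that distinguishes the two (sketch, Linux error codes):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("port open: apiserver is at least listening")
            return
        }
        if errors.Is(err, syscall.ECONNREFUSED) {
            fmt.Println("refused: host reachable, but nothing listening on 8441")
        } else {
            fmt.Println("other dial error (possible network problem):", err)
        }
    }
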
	I1208 00:31:40.594790  890932 type.go:168] "Request Body" body=""
	I1208 00:31:40.594872  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:40.595175  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.094806  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.094893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.095209  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:41.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:41.595791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:41.596479  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.094922  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.095018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.095545  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:42.595373  890932 type.go:168] "Request Body" body=""
	I1208 00:31:42.595463  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:42.596518  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:31:42.596581  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:43.095290  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.095363  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.095661  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.595732  890932 type.go:168] "Request Body" body=""
	I1208 00:31:43.595812  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:43.596157  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:43.996743  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:31:44.061795  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:44.065597  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.065636  890932 retry.go:31] will retry after 36.030777087s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 00:31:44.094709  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.095134  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:44.595619  890932 type.go:168] "Request Body" body=""
	I1208 00:31:44.595689  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:44.596188  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:45.095192  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.095284  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.095734  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:45.095814  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:45.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:31:45.595664  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:45.596700  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:46.095471  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.095564  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.095854  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:46.595665  890932 type.go:168] "Request Body" body=""
	I1208 00:31:46.595741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:46.596605  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:47.095443  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.095528  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.095832  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:47.095881  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:47.595397  890932 type.go:168] "Request Body" body=""
	I1208 00:31:47.595480  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:47.595753  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.095688  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.095797  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.096203  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:48.594869  890932 type.go:168] "Request Body" body=""
	I1208 00:31:48.594949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:48.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:49.095593  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.095675  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.096008  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:49.096067  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:49.594760  890932 type.go:168] "Request Body" body=""
	I1208 00:31:49.594865  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:49.595221  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.094833  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:50.595672  890932 type.go:168] "Request Body" body=""
	I1208 00:31:50.595748  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:50.596966  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1208 00:31:51.095757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.095841  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.096183  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:51.096238  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:51.594921  890932 type.go:168] "Request Body" body=""
	I1208 00:31:51.595014  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:51.595361  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.094793  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.094871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.095231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:52.594828  890932 type.go:168] "Request Body" body=""
	I1208 00:31:52.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:52.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.094823  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:53.594757  890932 type.go:168] "Request Body" body=""
	I1208 00:31:53.594827  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:53.595090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:53.595131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:54.094862  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.094952  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.095337  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:54.595028  890932 type.go:168] "Request Body" body=""
	I1208 00:31:54.595111  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:54.595443  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.095154  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.095659  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:55.595576  890932 type.go:168] "Request Body" body=""
	I1208 00:31:55.595659  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:55.595995  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:55.596040  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:31:56.094916  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.095298  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:56.520835  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 00:31:56.580569  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580606  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:31:56.580706  890932 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
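
At this point the storage-provisioner addon gives up: the retry budget is exhausted and the accumulated error is surfaced as a warning through out.go, while the node-readiness polling continues independently. A sketch of the error-aggregation shape suggested by the "running callbacks: [...]" wording (hypothetical names, not minikube's addons API):

    package main

    import (
        "errors"
        "fmt"
    )

    // runCallbacks runs each addon apply as a callback and joins any failures
    // into one error, reported only after all retries have been spent.
    func runCallbacks(cbs ...func() error) error {
        var errs []error
        for _, cb := range cbs {
            if err := cb(); err != nil {
                errs = append(errs, err)
            }
        }
        return errors.Join(errs...) // nil if every callback succeeded
    }

    func main() {
        err := runCallbacks(
            func() error { return errors.New("apply storage-provisioner.yaml: exit status 1") },
        )
        fmt.Println(err)
    }
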
	I1208 00:31:56.595717  890932 type.go:168] "Request Body" body=""
	I1208 00:31:56.595785  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:56.596127  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.094846  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.094922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:57.594959  890932 type.go:168] "Request Body" body=""
	I1208 00:31:57.595042  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:57.595375  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:58.095719  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.095802  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.096233  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:31:58.096313  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
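
	Editor's note: the loop above re-issues GET /api/v1/nodes/functional-386544 roughly every 500ms, and node_ready.go logs a "will retry" warning while the dial keeps failing. A minimal client-go sketch of the same poll-until-Ready pattern (assumptions: k8s.io/client-go is available, and the kubeconfig path mirrors the log's /var/lib/minikube/kubeconfig; this is an illustration, not minikube's actual node_ready.go):

	// node_ready_poll.go — a hedged sketch of polling a node's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this log; adjust for other environments.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-386544", metav1.GetOptions{})
			if err != nil {
				// While the apiserver refuses connections, the GET fails here
				// and we retry on the same ~500ms cadence as the log.
				fmt.Println("will retry:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
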
	I1208 00:31:58.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:31:58.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:58.595297  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.095006  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.095098  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.095434  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:31:59.594763  890932 type.go:168] "Request Body" body=""
	I1208 00:31:59.594848  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:31:59.595114  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.095240  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.095594  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:00.595468  890932 type.go:168] "Request Body" body=""
	I1208 00:32:00.595570  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:00.596011  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:00.596082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.094962  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:01.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:01.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:01.595258  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.096010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:02.595794  890932 type.go:168] "Request Body" body=""
	I1208 00:32:02.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:02.596311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:02.596371  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:03.095057  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.095145  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.095500  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:03.595367  890932 type.go:168] "Request Body" body=""
	I1208 00:32:03.595442  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:03.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.095519  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.095642  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.096000  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:04.595726  890932 type.go:168] "Request Body" body=""
	I1208 00:32:04.595814  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:04.596263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:05.095616  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.095688  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.095960  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:05.096006  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:05.595742  890932 type.go:168] "Request Body" body=""
	I1208 00:32:05.595817  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:05.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.094954  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.095308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:06.595654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:06.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:06.596003  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:07.095781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.095861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.096199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:07.096254  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:07.594824  890932 type.go:168] "Request Body" body=""
	I1208 00:32:07.594910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:07.595239  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.094781  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.095147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:08.595140  890932 type.go:168] "Request Body" body=""
	I1208 00:32:08.595213  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:08.595560  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.095144  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.095234  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.095578  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:09.595126  890932 type.go:168] "Request Body" body=""
	I1208 00:32:09.595198  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:09.595458  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:09.595499  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:10.095157  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.095251  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.095657  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:10.595220  890932 type.go:168] "Request Body" body=""
	I1208 00:32:10.595297  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:10.595648  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.095385  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.095455  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.095752  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:11.595492  890932 type.go:168] "Request Body" body=""
	I1208 00:32:11.595574  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:11.595922  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:11.595978  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:12.095776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.095855  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.096220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:12.594787  890932 type.go:168] "Request Body" body=""
	I1208 00:32:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:12.595182  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.094907  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.094987  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.095332  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:13.595577  890932 type.go:168] "Request Body" body=""
	I1208 00:32:13.595657  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:13.596016  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:13.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:14.095571  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.095649  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.095941  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:14.595772  890932 type.go:168] "Request Body" body=""
	I1208 00:32:14.595853  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:14.596231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.094795  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.094898  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.095334  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:15.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:32:15.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:15.595180  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:16.094853  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:16.095326  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:16.595008  890932 type.go:168] "Request Body" body=""
	I1208 00:32:16.595092  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:16.595453  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.094788  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.095049  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:17.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:17.594824  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:17.595151  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.094845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.094926  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:18.595685  890932 type.go:168] "Request Body" body=""
	I1208 00:32:18.595803  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:18.596147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:18.596225  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:19.094780  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.095319  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:19.594891  890932 type.go:168] "Request Body" body=""
	I1208 00:32:19.594970  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:19.595320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.094805  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.094881  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.095201  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:20.097611  890932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 00:32:20.173666  890932 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173721  890932 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 00:32:20.173816  890932 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 00:32:20.177110  890932 out.go:179] * Enabled addons: 
	I1208 00:32:20.180584  890932 addons.go:530] duration metric: took 1m32.061097112s for enable addons: enabled=[]
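
	Editor's note: both addon manifests (storage-provisioner and storageclass) failed with the same refused connection, so addon enablement finishes after 1m32s with an empty set (enabled=[]). The "apply failed, will retry" warnings imply a retry loop around kubectl apply; the following is a hedged illustration of such a retry-with-backoff pattern (not minikube's actual addons.go; a kubectl binary on PATH is assumed):

	// retry_apply.go — a sketch of retrying `kubectl apply` with capped backoff.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, deadline time.Duration) error {
		backoff := 500 * time.Millisecond
		stop := time.Now().Add(deadline)
		for {
			// Flags mirror the invocation in this log; the binary path differs.
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(stop) {
				return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			}
			time.Sleep(backoff)
			if backoff < 8*time.Second {
				backoff *= 2 // exponential backoff, capped at 8s
			}
		}
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 90*time.Second); err != nil {
			fmt.Println("giving up:", err)
		}
	}
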
	I1208 00:32:20.595272  890932 type.go:168] "Request Body" body=""
	I1208 00:32:20.595353  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:20.595670  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:21.095445  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:21.095926  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:21.595648  890932 type.go:168] "Request Body" body=""
	I1208 00:32:21.595732  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:21.596006  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.094730  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.094810  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.095155  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:22.594845  890932 type.go:168] "Request Body" body=""
	I1208 00:32:22.594924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:22.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:23.095654  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.095734  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.096034  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:23.096082  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:23.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:32:23.594882  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:23.595243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.094924  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.095286  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:24.595670  890932 type.go:168] "Request Body" body=""
	I1208 00:32:24.595754  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:24.596025  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:25.095811  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.095896  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.096308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:25.096381  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:25.594842  890932 type.go:168] "Request Body" body=""
	I1208 00:32:25.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:25.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.095626  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.095702  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.095977  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:26.595770  890932 type.go:168] "Request Body" body=""
	I1208 00:32:26.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:26.596206  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.094847  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.094927  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.095271  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:27.594777  890932 type.go:168] "Request Body" body=""
	I1208 00:32:27.594856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:27.595143  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:27.595194  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:28.094869  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.095355  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:28.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:32:28.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:28.595399  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.095084  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.095158  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:29.595122  890932 type.go:168] "Request Body" body=""
	I1208 00:32:29.595197  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:29.595539  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:29.595597  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:30.095160  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.095253  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:30.595339  890932 type.go:168] "Request Body" body=""
	I1208 00:32:30.595416  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:30.595701  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.095621  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.095959  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:31.595634  890932 type.go:168] "Request Body" body=""
	I1208 00:32:31.595713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:31.596065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:31.596120  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:32.095700  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.095777  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.096086  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:32.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:32:32.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:32.595231  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.094864  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.094941  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.095295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:33.594792  890932 type.go:168] "Request Body" body=""
	I1208 00:32:33.594866  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:33.595186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:34.094871  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.094953  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:34.095348  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:34.595041  890932 type.go:168] "Request Body" body=""
	I1208 00:32:34.595122  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:34.595476  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.095733  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.096082  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:35.594748  890932 type.go:168] "Request Body" body=""
	I1208 00:32:35.594826  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:35.595179  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.094819  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:36.595680  890932 type.go:168] "Request Body" body=""
	I1208 00:32:36.595807  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:36.596074  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:36.596131  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:37.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:32:37.094901  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.095247  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:37.594826  890932 type.go:168] "Request Body" body=""
	I1208 00:32:37.594902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:37.595255  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.095114  890932 type.go:168] "Request Body" body=""
	I1208 00:32:38.095188  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.095665  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:38.594759  890932 type.go:168] "Request Body" body=""
	I1208 00:32:38.594842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:38.595165  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:39.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:32:39.094944  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.095320  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:39.095377  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:39.594776  890932 type.go:168] "Request Body" body=""
	I1208 00:32:39.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:39.595118  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:32:40.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.095374  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:40.595107  890932 type.go:168] "Request Body" body=""
	I1208 00:32:40.595184  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:40.595524  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.094737  890932 type.go:168] "Request Body" body=""
	I1208 00:32:41.094813  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.095065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:41.594796  890932 type.go:168] "Request Body" body=""
	I1208 00:32:41.594877  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:41.595196  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:41.595246  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:42.094951  890932 type.go:168] "Request Body" body=""
	I1208 00:32:42.095040  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.095448  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:42.595773  890932 type.go:168] "Request Body" body=""
	I1208 00:32:42.595847  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:42.596153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.094911  890932 type.go:168] "Request Body" body=""
	I1208 00:32:43.095007  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.095748  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:43.594741  890932 type.go:168] "Request Body" body=""
	I1208 00:32:43.594832  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:43.596090  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	W1208 00:32:43.596148  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:44.095588  890932 type.go:168] "Request Body" body=""
	I1208 00:32:44.095673  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.095930  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:44.595731  890932 type.go:168] "Request Body" body=""
	I1208 00:32:44.595809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:44.596147  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.094947  890932 type.go:168] "Request Body" body=""
	I1208 00:32:45.095058  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.095377  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:45.595628  890932 type.go:168] "Request Body" body=""
	I1208 00:32:45.595713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:45.595984  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:46.095846  890932 type.go:168] "Request Body" body=""
	I1208 00:32:46.095977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.096455  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:46.096521  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:32:46.595187  890932 type.go:168] "Request Body" body=""
	I1208 00:32:46.595275  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:46.595599  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.095273  890932 type.go:168] "Request Body" body=""
	I1208 00:32:47.095341  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.095628  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:47.595415  890932 type.go:168] "Request Body" body=""
	I1208 00:32:47.595489  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:47.595803  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.095646  890932 type.go:168] "Request Body" body=""
	I1208 00:32:48.095728  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.096086  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:32:48.595037  890932 type.go:168] "Request Body" body=""
	I1208 00:32:48.595138  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:32:48.595519  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:32:48.595573  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~120 further identical request/response pairs omitted: the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-386544 is retried every ~500 ms and returns an empty response each time, and the same node_ready.go:55 "connection refused" warning repeats every ~2-2.5 s, from 00:32:49 through 00:33:50 ...]
	I1208 00:33:50.594890  890932 type.go:168] "Request Body" body=""
	I1208 00:33:50.594973  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:50.595313  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:50.595368  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:51.094876  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.095346  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:51.595652  890932 type.go:168] "Request Body" body=""
	I1208 00:33:51.595742  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:51.596078  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.095724  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.095805  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.096192  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:52.594926  890932 type.go:168] "Request Body" body=""
	I1208 00:33:52.595020  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:52.595378  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:52.595433  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:53.094786  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.095198  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:53.594865  890932 type.go:168] "Request Body" body=""
	I1208 00:33:53.594946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:53.595299  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.094881  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.094965  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:54.595582  890932 type.go:168] "Request Body" body=""
	I1208 00:33:54.595660  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:54.595948  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:54.595991  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:55.095773  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.095890  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.096222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:55.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:33:55.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:55.595262  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.095686  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.095975  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:56.595757  890932 type.go:168] "Request Body" body=""
	I1208 00:33:56.595833  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:56.596223  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:56.596285  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:57.094855  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.094930  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.095265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:57.595601  890932 type.go:168] "Request Body" body=""
	I1208 00:33:57.595670  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:57.595954  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.095735  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.095811  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.096159  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:58.594840  890932 type.go:168] "Request Body" body=""
	I1208 00:33:58.594919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:58.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:33:59.095600  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.095680  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.095963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:33:59.096015  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:33:59.595776  890932 type.go:168] "Request Body" body=""
	I1208 00:33:59.595860  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:33:59.596187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.094948  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.095044  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:00.594807  890932 type.go:168] "Request Body" body=""
	I1208 00:34:00.594922  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:00.595187  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.094870  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.094949  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:01.594909  890932 type.go:168] "Request Body" body=""
	I1208 00:34:01.594995  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:01.595385  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:01.595446  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:02.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.095145  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:02.594857  890932 type.go:168] "Request Body" body=""
	I1208 00:34:02.594938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:02.595302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.095022  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.095104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.095477  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:03.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:34:03.595437  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:03.595711  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:03.595753  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:04.095511  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.095589  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.095964  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:04.595805  890932 type.go:168] "Request Body" body=""
	I1208 00:34:04.595893  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:04.596256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.094816  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.094892  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.095280  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:05.595023  890932 type.go:168] "Request Body" body=""
	I1208 00:34:05.595117  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:05.595525  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:06.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.094936  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.095311  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:06.095367  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:06.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:06.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:06.595230  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.094839  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.094915  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.095222  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:07.594889  890932 type.go:168] "Request Body" body=""
	I1208 00:34:07.594993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:07.595353  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:08.095670  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.095741  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.096065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:08.096123  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:08.594838  890932 type.go:168] "Request Body" body=""
	I1208 00:34:08.594921  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:08.595235  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.094901  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.094980  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.095318  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:09.595609  890932 type.go:168] "Request Body" body=""
	I1208 00:34:09.595691  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:09.595986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.095220  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:10.594928  890932 type.go:168] "Request Body" body=""
	I1208 00:34:10.595018  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:10.595327  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:10.595376  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:11.094812  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.094900  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.095243  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:11.594874  890932 type.go:168] "Request Body" body=""
	I1208 00:34:11.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:11.595288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.094946  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.095242  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:12.594780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:12.594849  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:12.595130  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:13.094818  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.094897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.095245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:13.095308  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:13.594998  890932 type.go:168] "Request Body" body=""
	I1208 00:34:13.595102  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:13.595450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.095713  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.095782  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.096067  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:14.595722  890932 type.go:168] "Request Body" body=""
	I1208 00:34:14.595804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:14.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:15.094925  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.095009  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.095362  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:15.095419  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:15.595024  890932 type.go:168] "Request Body" body=""
	I1208 00:34:15.595091  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:15.595369  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.094880  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.095285  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:16.595018  890932 type.go:168] "Request Body" body=""
	I1208 00:34:16.595096  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:16.595400  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:17.095070  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.095143  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.095425  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:17.095470  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:17.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:17.594950  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:17.595419  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.094891  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.094971  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.095301  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:18.595365  890932 type.go:168] "Request Body" body=""
	I1208 00:34:18.595444  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:18.595738  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:19.095230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.095306  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.095655  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:19.095709  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:19.595477  890932 type.go:168] "Request Body" body=""
	I1208 00:34:19.595561  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:19.595895  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.095714  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.095809  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.096185  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:20.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:34:20.594929  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:20.595277  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.094844  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.094919  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.095213  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:21.595647  890932 type.go:168] "Request Body" body=""
	I1208 00:34:21.595727  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:21.596033  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:21.596080  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:22.094759  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.094856  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:22.594975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:22.595067  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:22.595475  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.094712  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.094791  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.095065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:23.594862  890932 type.go:168] "Request Body" body=""
	I1208 00:34:23.594942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:23.595295  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:24.094995  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.095075  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.095444  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:24.095501  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:24.595780  890932 type.go:168] "Request Body" body=""
	I1208 00:34:24.595858  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:24.596186  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.094765  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.094851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.095211  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:25.594923  890932 type.go:168] "Request Body" body=""
	I1208 00:34:25.595001  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:25.595307  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.095156  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:26.594831  890932 type.go:168] "Request Body" body=""
	I1208 00:34:26.594911  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:26.595245  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:26.595299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:27.094975  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.095063  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.095380  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:27.594800  890932 type.go:168] "Request Body" body=""
	I1208 00:34:27.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:27.595139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.094942  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.095238  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:28.595230  890932 type.go:168] "Request Body" body=""
	I1208 00:34:28.595311  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:28.595664  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:28.595719  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:29.095432  890932 type.go:168] "Request Body" body=""
	I1208 00:34:29.095508  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.095787  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:29.595520  890932 type.go:168] "Request Body" body=""
	I1208 00:34:29.595592  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:29.595939  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.095902  890932 type.go:168] "Request Body" body=""
	I1208 00:34:30.096081  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.096554  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:30.595239  890932 type.go:168] "Request Body" body=""
	I1208 00:34:30.595307  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:30.595584  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:31.095490  890932 type.go:168] "Request Body" body=""
	I1208 00:34:31.095572  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.095910  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:31.095974  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:31.595725  890932 type.go:168] "Request Body" body=""
	I1208 00:34:31.595808  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:31.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.095639  890932 type.go:168] "Request Body" body=""
	I1208 00:34:32.095709  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.095992  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:32.595814  890932 type.go:168] "Request Body" body=""
	I1208 00:34:32.595902  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:32.596323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.094887  890932 type.go:168] "Request Body" body=""
	I1208 00:34:33.094963  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.095317  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:33.595336  890932 type.go:168] "Request Body" body=""
	I1208 00:34:33.595409  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:33.595677  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:33.595717  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:34.095551  890932 type.go:168] "Request Body" body=""
	I1208 00:34:34.095629  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.095979  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:34.595790  890932 type.go:168] "Request Body" body=""
	I1208 00:34:34.595868  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:34.596199  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.094789  890932 type.go:168] "Request Body" body=""
	I1208 00:34:35.094862  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.095168  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:35.594877  890932 type.go:168] "Request Body" body=""
	I1208 00:34:35.594956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:35.595304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:36.094868  890932 type.go:168] "Request Body" body=""
	I1208 00:34:36.094956  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.095324  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:36.095379  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:36.595585  890932 type.go:168] "Request Body" body=""
	I1208 00:34:36.595663  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:36.595963  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.095729  890932 type.go:168] "Request Body" body=""
	I1208 00:34:37.095810  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.096161  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:37.594860  890932 type.go:168] "Request Body" body=""
	I1208 00:34:37.594967  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:37.595335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:38.095649  890932 type.go:168] "Request Body" body=""
	I1208 00:34:38.095728  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.096015  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:38.096058  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:34:38.595114  890932 type.go:168] "Request Body" body=""
	I1208 00:34:38.595196  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:38.595553  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.095363  890932 type.go:168] "Request Body" body=""
	I1208 00:34:39.095447  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.095793  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:39.595427  890932 type.go:168] "Request Body" body=""
	I1208 00:34:39.595505  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:39.595881  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:34:40.095722  890932 type.go:168] "Request Body" body=""
	I1208 00:34:40.095803  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:34:40.096152  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:34:40.096206  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-386544 poll repeats unchanged every ~500ms from 00:34:40 through 00:35:41, each request failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and a node_ready "will retry" warning logged every few seconds ...]
	I1208 00:35:42.095710  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.095822  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.096304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:42.096383  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:42.594785  890932 type.go:168] "Request Body" body=""
	I1208 00:35:42.594857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:42.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.094948  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:43.595136  890932 type.go:168] "Request Body" body=""
	I1208 00:35:43.595212  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:43.595549  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.095717  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.095796  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.096072  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:44.594804  890932 type.go:168] "Request Body" body=""
	I1208 00:35:44.594891  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:44.595279  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:44.595340  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:45.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.094993  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.095422  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:45.595058  890932 type.go:168] "Request Body" body=""
	I1208 00:35:45.595128  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:45.595471  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.095186  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.095266  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.095625  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:46.595402  890932 type.go:168] "Request Body" body=""
	I1208 00:35:46.595481  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:46.595824  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:46.595879  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:47.095525  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.095868  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:47.595618  890932 type.go:168] "Request Body" body=""
	I1208 00:35:47.595696  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:47.596010  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.095716  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.095799  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.096202  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:48.595337  890932 type.go:168] "Request Body" body=""
	I1208 00:35:48.595413  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:48.595706  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:49.095444  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.095524  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.095902  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:49.095961  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:49.595540  890932 type.go:168] "Request Body" body=""
	I1208 00:35:49.595625  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:49.595976  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.095709  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.095792  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.096095  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:50.594799  890932 type.go:168] "Request Body" body=""
	I1208 00:35:50.594874  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:50.595249  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.094959  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.095064  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.095433  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:51.594801  890932 type.go:168] "Request Body" body=""
	I1208 00:35:51.594875  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:51.595228  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:51.595287  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:52.094896  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.094975  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.095331  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:52.595042  890932 type.go:168] "Request Body" body=""
	I1208 00:35:52.595124  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:52.595480  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.094777  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.094861  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.095139  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:53.594836  890932 type.go:168] "Request Body" body=""
	I1208 00:35:53.594937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:53.595282  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:53.595384  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:54.094872  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.095335  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:54.595658  890932 type.go:168] "Request Body" body=""
	I1208 00:35:54.595729  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:54.596021  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.094747  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.094842  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:55.594895  890932 type.go:168] "Request Body" body=""
	I1208 00:35:55.594977  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:55.595323  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:56.095674  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.095747  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.096062  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:56.096108  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:56.594963  890932 type.go:168] "Request Body" body=""
	I1208 00:35:56.595039  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:56.595371  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.094851  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.094934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.095302  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:57.594883  890932 type.go:168] "Request Body" body=""
	I1208 00:35:57.594996  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:57.595394  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.095103  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.095186  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.095515  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:58.595715  890932 type.go:168] "Request Body" body=""
	I1208 00:35:58.595795  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:58.596169  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:35:58.596227  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:35:59.095645  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.095725  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.096039  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:35:59.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:35:59.594804  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:35:59.595133  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.094929  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.095015  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.095342  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:00.595183  890932 type.go:168] "Request Body" body=""
	I1208 00:36:00.595265  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:00.595623  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:01.095438  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.095520  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.095859  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:01.095916  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:01.595630  890932 type.go:168] "Request Body" body=""
	I1208 00:36:01.595708  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:01.596080  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.095668  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.096058  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:02.594816  890932 type.go:168] "Request Body" body=""
	I1208 00:36:02.594895  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:02.595265  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.094824  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.094905  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.095283  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:03.594768  890932 type.go:168] "Request Body" body=""
	I1208 00:36:03.594871  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:03.595207  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:03.595263  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:04.094859  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.094958  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.095267  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:04.594813  890932 type.go:168] "Request Body" body=""
	I1208 00:36:04.594897  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:04.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.095649  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.095720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:05.595766  890932 type.go:168] "Request Body" body=""
	I1208 00:36:05.595851  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:05.596204  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:05.596299  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:06.094858  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.094937  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.095304  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:06.595639  890932 type.go:168] "Request Body" body=""
	I1208 00:36:06.595720  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:06.596054  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.094760  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.094857  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.095153  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:07.594898  890932 type.go:168] "Request Body" body=""
	I1208 00:36:07.594972  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:07.595325  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:08.095635  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.095713  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.095986  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:08.096028  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:08.595142  890932 type.go:168] "Request Body" body=""
	I1208 00:36:08.595227  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:08.595555  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.095287  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.095364  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.095690  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:09.595392  890932 type.go:168] "Request Body" body=""
	I1208 00:36:09.595461  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:09.595724  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.095517  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.095598  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:10.595700  890932 type.go:168] "Request Body" body=""
	I1208 00:36:10.595784  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:10.596160  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:10.596216  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:11.094775  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.094850  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.095194  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:11.594837  890932 type.go:168] "Request Body" body=""
	I1208 00:36:11.594917  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:11.595266  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.094980  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.095061  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.095386  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:12.594781  890932 type.go:168] "Request Body" body=""
	I1208 00:36:12.594863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:12.595126  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:13.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.094912  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.095288  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:13.095347  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:13.595000  890932 type.go:168] "Request Body" body=""
	I1208 00:36:13.595104  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:13.595437  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.095097  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.095172  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.095450  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:14.595177  890932 type.go:168] "Request Body" body=""
	I1208 00:36:14.595281  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:14.595679  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:15.095515  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.095616  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.096002  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:15.096068  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:15.595565  890932 type.go:168] "Request Body" body=""
	I1208 00:36:15.595677  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:15.595994  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.094724  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.094815  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.095174  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:16.594849  890932 type.go:168] "Request Body" body=""
	I1208 00:36:16.594934  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:16.595308  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.094788  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.094859  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.095173  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:17.594829  890932 type.go:168] "Request Body" body=""
	I1208 00:36:17.594913  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:17.595226  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:17.595272  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:18.094841  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.094920  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.095270  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:18.594723  890932 type.go:168] "Request Body" body=""
	I1208 00:36:18.594793  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:18.595065  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.094767  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.094863  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.095240  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:19.594850  890932 type.go:168] "Request Body" body=""
	I1208 00:36:19.594925  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:19.595263  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:19.595322  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:20.095644  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.095737  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.096099  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:20.594808  890932 type.go:168] "Request Body" body=""
	I1208 00:36:20.594887  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:20.595234  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.094954  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.095036  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.095363  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:21.595677  890932 type.go:168] "Request Body" body=""
	I1208 00:36:21.595750  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:21.596077  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:21.596147  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:22.094860  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.094938  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.095256  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:22.594861  890932 type.go:168] "Request Body" body=""
	I1208 00:36:22.594939  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:22.595289  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.095643  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.095723  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.096019  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:23.595048  890932 type.go:168] "Request Body" body=""
	I1208 00:36:23.595147  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:23.595567  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:24.095396  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.095478  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.095907  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:24.095979  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:24.595457  890932 type.go:168] "Request Body" body=""
	I1208 00:36:24.595529  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:24.595803  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.095586  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.095668  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.096057  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:25.595744  890932 type.go:168] "Request Body" body=""
	I1208 00:36:25.595838  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:25.596274  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:26.095659  890932 type.go:168] "Request Body" body=""
	I1208 00:36:26.095743  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.096092  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:26.096144  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:26.594794  890932 type.go:168] "Request Body" body=""
	I1208 00:36:26.594918  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:26.595273  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.094852  890932 type.go:168] "Request Body" body=""
	I1208 00:36:27.094928  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.095252  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:27.594796  890932 type.go:168] "Request Body" body=""
	I1208 00:36:27.594876  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:27.595170  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.094830  890932 type.go:168] "Request Body" body=""
	I1208 00:36:28.094910  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.095241  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:28.595353  890932 type.go:168] "Request Body" body=""
	I1208 00:36:28.595430  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:28.595768  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:28.595815  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:29.095242  890932 type.go:168] "Request Body" body=""
	I1208 00:36:29.095310  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.095629  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:29.595200  890932 type.go:168] "Request Body" body=""
	I1208 00:36:29.595280  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:29.595637  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.095238  890932 type.go:168] "Request Body" body=""
	I1208 00:36:30.095338  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.095745  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1208 00:36:30.595479  890932 type.go:168] "Request Body" body=""
	I1208 00:36:30.595561  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:30.595834  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1208 00:36:30.595883  890932 node_ready.go:55] error getting node "functional-386544" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-386544": dial tcp 192.168.49.2:8441: connect: connection refused
	I1208 00:36:31.095683  890932 type.go:168] "Request Body" body=""
	I1208 00:36:31.095759  890932 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-386544" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1208 00:36:31.096119  890932 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the GET poll above repeats roughly every 0.5s through 00:36:48; each attempt logs an empty "Request Body", the same headers, and a status-less "Response", and node_ready.go:55 periodically warns "will retry" with "dial tcp 192.168.49.2:8441: connect: connection refused" ...]
	I1208 00:36:49.094763  890932 type.go:168] "Request Body" body=""
	I1208 00:36:49.094842  890932 node_ready.go:38] duration metric: took 6m0.000209264s for node "functional-386544" to be "Ready" ...
	I1208 00:36:49.097838  890932 out.go:203] 
	W1208 00:36:49.100712  890932 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 00:36:49.100735  890932 out.go:285] * 
	W1208 00:36:49.102896  890932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:36:49.105576  890932 out.go:203] 
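Note: the wait loop above polls the node's Ready condition for the full 6m0s and every attempt dies with "connection refused", meaning the apiserver at 192.168.49.2:8441 never came up at all. A minimal sketch of how the same probe can be reproduced by hand (profile name and address taken from the log; the commands themselves are generic diagnostics, not part of the test harness):

	# Probe the endpoint the wait loop is hitting (expect "connection refused" here).
	curl -k --max-time 5 https://192.168.49.2:8441/healthz

	# Once the apiserver answers, read the Ready condition the wait loop checks.
	kubectl get node functional-386544 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'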
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:36:56 functional-386544 containerd[5240]: time="2025-12-08T00:36:56.516755869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.589473531Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.594729164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.602007811Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:57 functional-386544 containerd[5240]: time="2025-12-08T00:36:57.602698149Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.551574100Z" level=info msg="No images store for sha256:c14738a2f1514ed626f87e54ffcc2c6b1b148fca2fa2ca921f3182937ee7a0a8"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.554054710Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-386544\""
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.561256072Z" level=info msg="ImageCreate event name:\"sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:58 functional-386544 containerd[5240]: time="2025-12-08T00:36:58.561898500Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.382483877Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.385056715Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.387136320Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 08 00:36:59 functional-386544 containerd[5240]: time="2025-12-08T00:36:59.399049776Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.438058682Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.440899083Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.444347475Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.451112114Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.638310190Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.641281916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.648394300Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.648713106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.824579526Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.826728482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.833855512Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:37:00 functional-386544 containerd[5240]: time="2025-12-08T00:37:00.834577965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:37:05.050518    9415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:05.051287    9415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:05.052966    9415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:05.053563    9415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:37:05.055090    9415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
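Note: "connection refused" on localhost:8441 inside the node matches the empty container table above: no kube-apiserver container was ever created. A hedged way to check this from the host (assumes the standard kicbase node image, which ships crictl):

	# List all CRI containers inside the minikube node; an empty table means
	# the kubelet never started the static control-plane pods.
	minikube -p functional-386544 ssh -- sudo crictl ps -a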
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:37:05 up  5:19,  0 user,  load average: 0.52, 0.44, 1.06
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:37:02 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:02 functional-386544 kubelet[9193]: E1208 00:37:02.164520    9193 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 08 00:37:02 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:02 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:02 functional-386544 kubelet[9290]: E1208 00:37:02.904641    9290 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:02 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:03 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 08 00:37:03 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:03 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:03 functional-386544 kubelet[9309]: E1208 00:37:03.613737    9309 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:03 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:03 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:04 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 08 00:37:04 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:04 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:04 functional-386544 kubelet[9332]: E1208 00:37:04.427374    9332 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:37:04 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:37:04 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:37:05 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 831.
	Dec 08 00:37:05 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:37:05 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
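Note: the kubelet log above is the actual root cause of the failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and systemd has already restarted it 831 times. A minimal sketch for confirming the host's cgroup version (generic diagnostics, not part of the suite):

	# cgroup2fs means cgroup v2; tmpfs means legacy cgroup v1 (the failing case here).
	stat -fc %T /sys/fs/cgroup/

	# Docker reports the same fact:
	docker info --format '{{.CgroupVersion}}'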
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (379.748257ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (733.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 00:39:30.128280  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:41:28.221323  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:42:51.294275  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:44:30.127901  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:46:28.223343  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m10.841360464s)

-- stdout --
	* [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00008019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[identical to the "X Error starting cluster" kubeadm init output above: same preflight and system-verification results, same certs/kubeconfig/control-plane phases, same 4m0.000305556s kubelet healthz timeout]
	
	stderr:
	[same three warnings as above: unparseable kernel config, deprecated cgroup v1 support (set the kubelet option 'FailCgroupV1' to 'false' to keep using it), kubelet.service not enabled; wait-control-plane again failed with "context deadline exceeded"]
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
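Note: the [WARNING SystemVerification] text above names the escape hatch itself: kubelet v1.35+ only tolerates cgroup v1 when the KubeletConfiguration option FailCgroupV1 is explicitly set to false. A minimal sketch of that fragment, written as a kubeadm patch for the "kubeletconfiguration" target the log mentions ([patches] line above); the file name and how minikube would feed it to kubeadm are assumptions, not something this run does:

	# Hypothetical patch file for kubeadm's --patches directory.
	mkdir -p patches
	cat > patches/kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Opt back into deprecated cgroup v1 support, per the preflight warning.
	failCgroupV1: false
	EOF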
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m10.84235835s for "functional-386544" cluster.
I1208 00:49:16.809933  846711 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
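The "Ports" map in the inspect output above is the source of truth the harness uses for host-side ports; the same Go template that appears later in these logs can be run by hand. Illustrative only, using the mapping from this run (22/tcp published on 33558):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-386544
    # prints: 33558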
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (349.460255ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-932121 image ls --format yaml --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format json --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format table --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh     │ functional-932121 ssh pgrep buildkitd                                                                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image   │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                  │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls                                                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete  │ -p functional-932121                                                                                                                                    │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start   │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start   │ -p functional-386544 --alsologtostderr -v=8                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:latest                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add minikube-local-cache-test:functional-386544                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache delete minikube-local-cache-test:functional-386544                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl images                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │                     │
	│ cache   │ functional-386544 cache reload                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:37 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ kubectl │ functional-386544 kubectl -- --context functional-386544 get pods                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	│ start   │ -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
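	The last row of the audit table is the start whose post-mortem follows; reproduced as a standalone command it corresponds to:

	    out/minikube-linux-arm64 start -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all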
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:37:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:37:06.019721  896760 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:37:06.019851  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.019855  896760 out.go:374] Setting ErrFile to fd 2...
	I1208 00:37:06.019858  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.020163  896760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:37:06.020664  896760 out.go:368] Setting JSON to false
	I1208 00:37:06.021613  896760 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":19179,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:37:06.021695  896760 start.go:143] virtualization:  
	I1208 00:37:06.025173  896760 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:37:06.029087  896760 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:37:06.029181  896760 notify.go:221] Checking for updates...
	I1208 00:37:06.035043  896760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:37:06.037984  896760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:37:06.041170  896760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:37:06.044080  896760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:37:06.047053  896760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:37:06.050554  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:06.050663  896760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:37:06.082313  896760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:37:06.082426  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.147928  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.138471154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.148036  896760 docker.go:319] overlay module found
	I1208 00:37:06.151011  896760 out.go:179] * Using the docker driver based on existing profile
	I1208 00:37:06.153817  896760 start.go:309] selected driver: docker
	I1208 00:37:06.153826  896760 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.153925  896760 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:37:06.154035  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.211588  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.202265066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.212013  896760 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:37:06.212038  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:06.212099  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:06.212152  896760 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.217244  896760 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:37:06.220210  896760 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:37:06.223461  896760 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:37:06.226522  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:06.226581  896760 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:37:06.226589  896760 cache.go:65] Caching tarball of preloaded images
	I1208 00:37:06.226692  896760 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:37:06.226679  896760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:37:06.226706  896760 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:37:06.226817  896760 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:37:06.250884  896760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:37:06.250894  896760 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:37:06.250908  896760 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:37:06.250945  896760 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:37:06.250999  896760 start.go:364] duration metric: took 38.401µs to acquireMachinesLock for "functional-386544"
	I1208 00:37:06.251017  896760 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:37:06.251022  896760 fix.go:54] fixHost starting: 
	I1208 00:37:06.251283  896760 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:37:06.268900  896760 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:37:06.268920  896760 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:37:06.272102  896760 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:37:06.272127  896760 machine.go:94] provisionDockerMachine start ...
	I1208 00:37:06.272215  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.289500  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.289831  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.289837  896760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:37:06.446749  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.446764  896760 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:37:06.446826  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.466658  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.466960  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.466968  896760 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:37:06.637199  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.637280  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.656923  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.657245  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.657259  896760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:37:06.810893  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
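	The SSH command above is an idempotent guard: it only touches /etc/hosts when no line already maps the hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a new one. Its net effect on this machine is a single line (sketch, not captured output):

	    grep 127.0.1.1 /etc/hosts
	    # 127.0.1.1 functional-386544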
	I1208 00:37:06.810908  896760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:37:06.810925  896760 ubuntu.go:190] setting up certificates
	I1208 00:37:06.810935  896760 provision.go:84] configureAuth start
	I1208 00:37:06.811016  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:06.829686  896760 provision.go:143] copyHostCerts
	I1208 00:37:06.829765  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:37:06.829784  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:37:06.829861  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:37:06.829960  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:37:06.829964  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:37:06.829992  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:37:06.830039  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:37:06.830042  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:37:06.830063  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:37:06.830106  896760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:37:07.178648  896760 provision.go:177] copyRemoteCerts
	I1208 00:37:07.178704  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:37:07.178748  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.196483  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.308383  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:37:07.329033  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:37:07.348621  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:37:07.367658  896760 provision.go:87] duration metric: took 556.701814ms to configureAuth
	I1208 00:37:07.367675  896760 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:37:07.367867  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:07.367872  896760 machine.go:97] duration metric: took 1.095740792s to provisionDockerMachine
	I1208 00:37:07.367878  896760 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:37:07.367889  896760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:37:07.367938  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:37:07.367977  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.392993  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.498867  896760 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:37:07.502617  896760 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:37:07.502635  896760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:37:07.502647  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:37:07.502710  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:37:07.502786  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:37:07.502867  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:37:07.502912  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:37:07.511139  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:07.530267  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:37:07.549480  896760 start.go:296] duration metric: took 181.586948ms for postStartSetup
	I1208 00:37:07.549558  896760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:37:07.549616  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.567759  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.671740  896760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:37:07.676721  896760 fix.go:56] duration metric: took 1.425689657s for fixHost
	I1208 00:37:07.676741  896760 start.go:83] releasing machines lock for "functional-386544", held for 1.425734498s
	I1208 00:37:07.676811  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:07.694624  896760 ssh_runner.go:195] Run: cat /version.json
	I1208 00:37:07.694669  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.694717  896760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:37:07.694775  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.720790  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.720932  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.911560  896760 ssh_runner.go:195] Run: systemctl --version
	I1208 00:37:07.918241  896760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:37:07.922676  896760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:37:07.922750  896760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:37:07.930831  896760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
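	When bridge or podman CNI configs are present, the find/-exec pair above disables them by renaming rather than deleting; on this run none were found. A sketch of the rename, with a hypothetical filename shown only to illustrate the .mk_disabled convention:

	    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled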
	I1208 00:37:07.930844  896760 start.go:496] detecting cgroup driver to use...
	I1208 00:37:07.930875  896760 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:37:07.930921  896760 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:37:07.947115  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:37:07.961050  896760 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:37:07.961113  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:37:07.977365  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:37:07.991192  896760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:37:08.126175  896760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:37:08.269608  896760 docker.go:234] disabling docker service ...
	I1208 00:37:08.269664  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:37:08.284945  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:37:08.299108  896760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:37:08.432565  896760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:37:08.555248  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:37:08.569474  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:37:08.585412  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:37:08.595004  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:37:08.604840  896760 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:37:08.604902  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:37:08.613812  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.623203  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:37:08.633142  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.643038  896760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:37:08.652239  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:37:08.661623  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:37:08.671250  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
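	The sed sequence above rewrites /etc/containerd/config.toml in place: the sandbox image is pinned to pause:3.10.1, restrict_oom_score_adj and SystemdCgroup are forced to false (matching the cgroupfs driver detected earlier), legacy runtime names are migrated to io.containerd.runc.v2, and unprivileged ports are enabled. One way to confirm the result by hand (illustrative):

	    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
	    # expect: sandbox_image = "registry.k8s.io/pause:3.10.1", SystemdCgroup = false,
	    #         enable_unprivileged_ports = true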
	I1208 00:37:08.680657  896760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:37:08.688616  896760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:37:08.696764  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:08.823042  896760 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:37:08.984184  896760 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:37:08.984277  896760 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:37:08.989088  896760 start.go:564] Will wait 60s for crictl version
	I1208 00:37:08.989158  896760 ssh_runner.go:195] Run: which crictl
	I1208 00:37:08.993493  896760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:37:09.024246  896760 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:37:09.024323  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.048155  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.074342  896760 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:37:09.077377  896760 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:37:09.094080  896760 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:37:09.101988  896760 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:37:09.104771  896760 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:37:09.104921  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:09.104997  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.131121  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.131133  896760 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:37:09.131193  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.156235  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.156250  896760 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:37:09.156277  896760 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:37:09.156381  896760 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:37:09.156452  896760 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:37:09.182781  896760 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:37:09.182799  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:09.182812  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:09.182826  896760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:37:09.182847  896760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:37:09.182951  896760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:37:09.183025  896760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:37:09.190958  896760 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:37:09.191018  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:37:09.198701  896760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:37:09.211735  896760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:37:09.225024  896760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
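	The kubeadm config rendered above is what was just staged to /var/tmp/minikube/kubeadm.yaml.new. Outside the harness, such a file can be sanity-checked without persisting any changes to the node (illustrative, not part of this run):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run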
	I1208 00:37:09.237969  896760 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:37:09.241818  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:09.362221  896760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:37:09.592794  896760 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:37:09.592805  896760 certs.go:195] generating shared ca certs ...
	I1208 00:37:09.592820  896760 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:37:09.592963  896760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:37:09.593013  896760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:37:09.593019  896760 certs.go:257] generating profile certs ...
	I1208 00:37:09.593102  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:37:09.593154  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:37:09.593193  896760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:37:09.593299  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:37:09.593334  896760 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:37:09.593340  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:37:09.593370  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:37:09.593392  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:37:09.593414  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:37:09.593455  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:09.594053  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:37:09.614864  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:37:09.633613  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:37:09.652858  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:37:09.672208  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:37:09.691703  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:37:09.711394  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:37:09.730947  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:37:09.750211  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:37:09.769149  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:37:09.787710  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:37:09.806312  896760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:37:09.821128  896760 ssh_runner.go:195] Run: openssl version
	I1208 00:37:09.827672  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.835407  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:37:09.843631  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847882  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847954  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.890017  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:37:09.897920  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.905917  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:37:09.913958  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918017  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918088  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.960169  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:37:09.968154  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.975996  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:37:09.984080  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988210  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988283  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:37:10.030981  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
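
Each of the three ln/x509/test cycles above installs one CA into the node trust store using the OpenSSL convention: the cert is symlinked into /etc/ssl/certs under its own name and must also be reachable as <subject-hash>.0, the name `openssl x509 -hash -noout` computes. A minimal Go sketch of one cycle (local exec stands in for minikube's ssh_runner; the create-on-miss branch is an assumption, since the log only shows the `test -L` verification succeeding):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // run executes a command locally and returns trimmed stdout; minikube's
    // real code routes the same commands through ssh_runner on the node.
    func run(name string, args ...string) (string, error) {
    	out, err := exec.Command(name, args...).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // installCATrustLink mirrors one ln/x509/test cycle from the log.
    func installCATrustLink(pemPath string) error {
    	// Link the cert into /etc/ssl/certs under its own file name.
    	if _, err := run("sudo", "ln", "-fs", pemPath,
    		filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))); err != nil {
    		return err
    	}
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
    	hash, err := run("openssl", "x509", "-hash", "-noout", "-in", pemPath)
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// OpenSSL resolves trust via <hash>.0 symlinks; create one if missing
    	// (assumed branch -- the log only shows the verification succeeding).
    	if _, err := run("sudo", "test", "-L", link); err != nil {
    		_, err = run("sudo", "ln", "-fs", pemPath, link)
    		return err
    	}
    	return nil
    }

    func main() {
    	if err := installCATrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
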
	I1208 00:37:10.040434  896760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:37:10.045482  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:37:10.089037  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:37:10.131753  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:37:10.174120  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:37:10.216988  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:37:10.258490  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
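
The six `-checkend 86400` calls ask OpenSSL whether each control-plane cert expires within 86400 seconds (24 h); a non-zero exit would trigger regeneration. The equivalent check in pure Go via crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, matching `openssl x509 -checkend` semantics.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Path taken from the log; there it is checked on the node over SSH.
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
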
	I1208 00:37:10.300139  896760 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:10.300218  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:37:10.300290  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.333072  896760 cri.go:89] found id: ""
	I1208 00:37:10.333133  896760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:37:10.342949  896760 kubeadm.go:417] found existing configuration files, will attempt cluster restart
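
The restart decision above keys off node state: minikube only attempts a cluster restart when `sudo ls` finds kubeadm-flags.env, config.yaml, and the etcd data directory all present. A minimal Go sketch of that probe (local exec in place of the real ssh_runner call):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hasExistingCluster mirrors the `sudo ls ...` probe above: ls exits
    // non-zero if any of the three paths is missing, in which case minikube
    // would fall back to a fresh kubeadm init instead of a restart.
    func hasExistingCluster() bool {
    	err := exec.Command("sudo", "ls",
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/minikube/etcd").Run()
    	return err == nil
    }

    func main() {
    	if hasExistingCluster() {
    		fmt.Println("found existing configuration files, will attempt cluster restart")
    	}
    }
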
	I1208 00:37:10.342966  896760 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:37:10.343020  896760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:37:10.351917  896760 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.352512  896760 kubeconfig.go:125] found "functional-386544" server: "https://192.168.49.2:8441"
	I1208 00:37:10.356488  896760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:37:10.371422  896760 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:22:35.509962182 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:37:09.232874988 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
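
The drift detection shown above is a plain `diff -u` between the kubeadm.yaml deployed earlier (00:22) and the freshly rendered one; because this start carries an apiserver ExtraOptions override (enable-admission-plugins=NamespaceAutoProvision, visible in the StartCluster dump), the files differ and minikube reconfigures the control plane rather than simply restarting it. A sketch of the check, assuming local execution:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted mirrors the check in the log: `diff -u old new` exits 0
    // when the files match, so a non-nil error means they differ (or diff
    // itself failed).
    func configDrifted(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	return err != nil, string(out)
    }

    func main() {
    	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + diff)
    	}
    }
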
	I1208 00:37:10.371434  896760 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:37:10.371448  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 00:37:10.371510  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.399030  896760 cri.go:89] found id: ""
	I1208 00:37:10.399096  896760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:37:10.416716  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:37:10.425417  896760 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  8 00:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  8 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  8 00:26 /etc/kubernetes/scheduler.conf
	
	I1208 00:37:10.425491  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:37:10.433870  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:37:10.441918  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.441981  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:37:10.450104  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.458339  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.458406  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.466222  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:37:10.474083  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.474143  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
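
Before regenerating kubeconfigs, each file under /etc/kubernetes is grepped for the expected control-plane endpoint; admin.conf matched (no removal logged), while kubelet.conf, controller-manager.conf, and scheduler.conf did not and were deleted so the kubeconfig phase can rewrite them. A compact sketch of that scrub loop (local exec stands in for ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // scrubStaleKubeconfigs removes any kubeconfig that does not mention the
    // expected control-plane endpoint, as in the grep/rm cycle in the log.
    func scrubStaleKubeconfigs(endpoint string, files []string) error {
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent from the file.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s lacks %s, removing\n", f, endpoint)
    			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	_ = scrubStaleKubeconfigs("https://control-plane.minikube.internal:8441", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
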
	I1208 00:37:10.482138  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:37:10.490230  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:10.544026  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.386589  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.605461  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.662330  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
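
With the config in place, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against /var/tmp/minikube/kubeadm.yaml, using a PATH override to select the versioned kubeadm binary. A sketch of that sequence, built from the exact commands in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // replayInitPhases runs the kubeadm init phases in the order shown in the
    // log, with the versioned binaries directory prepended to PATH (the same
    // `sudo /bin/bash -c "env PATH=..."` wrapper the log uses).
    func replayInitPhases(version, config string) error {
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		script := fmt.Sprintf(
    			`env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
    			version, p, config)
    		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q: %v\n%s", p, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := replayInitPhases("v1.35.0-beta.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
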
	I1208 00:37:11.710396  896760 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:37:11.710500  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.210751  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.710625  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.211368  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.710629  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.210663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.710895  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.211137  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.711373  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.211351  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.710569  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.210608  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.710907  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.211191  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.710689  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.210845  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.710623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.211163  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.711542  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.210600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.710988  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.210661  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.710658  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.210891  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.711295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.210648  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.710685  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.211112  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.711299  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.210714  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.710657  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.710683  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.210651  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.711193  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.210592  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.710674  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.211143  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.711278  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.211249  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.711431  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.211577  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.711520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.711607  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.210653  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.711085  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.211213  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.210570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.710652  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.210844  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.710667  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.210595  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.710997  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.210972  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.710639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.211558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.711501  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.211600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.711606  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.211418  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.711303  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.210746  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.710559  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.210639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.710659  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.211497  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.711558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.211385  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.710636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.210636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.710883  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.211287  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.210917  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.710809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.210623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.710673  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.210672  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.210578  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.710671  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.210617  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.711226  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.211295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.711314  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.211406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.711446  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.211464  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.710703  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.211414  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.711319  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.210772  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.710561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.211386  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.710908  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.211262  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.710640  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.211590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.710555  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.211517  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.711490  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.211619  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.710621  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.710668  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.211521  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.711350  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.211355  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.711224  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.211378  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.710638  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:11.210954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
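
The long run of pgrep calls above is minikube waiting for a kube-apiserver process to appear, polling roughly every 500 ms; it never does, so after about a minute the code falls through to diagnostics. A generic version of such a poll-until-deadline loop:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline
    // passes, like the apiserver wait in the log (which timed out here).
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 as soon as a matching process exists.
    		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
    	fmt.Println(err)
    }
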
	I1208 00:38:11.710606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:11.710708  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:11.736420  896760 cri.go:89] found id: ""
	I1208 00:38:11.736434  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.736442  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:11.736447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:11.736514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:11.760217  896760 cri.go:89] found id: ""
	I1208 00:38:11.760231  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.760238  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:11.760243  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:11.760300  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:11.784869  896760 cri.go:89] found id: ""
	I1208 00:38:11.784882  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.784895  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:11.784900  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:11.784963  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:11.809329  896760 cri.go:89] found id: ""
	I1208 00:38:11.809345  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.809352  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:11.809357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:11.809412  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:11.836937  896760 cri.go:89] found id: ""
	I1208 00:38:11.836951  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.836958  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:11.836964  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:11.837022  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:11.861979  896760 cri.go:89] found id: ""
	I1208 00:38:11.861993  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.862000  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:11.862006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:11.862067  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:11.891173  896760 cri.go:89] found id: ""
	I1208 00:38:11.891187  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.891194  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:11.891202  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:11.891213  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:11.958401  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:11.958411  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:11.958422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:12.022654  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:12.022674  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:12.054077  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:12.054093  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:12.115415  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:12.115439  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
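
Each failed wait iteration gathers the same diagnostics: `kubectl describe nodes` (refused here, since nothing listens on 8441), kubelet and containerd logs via journalctl, dmesg, and a crictl/docker container listing. A sketch of that fan-out over the bash one-liners taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs runs each diagnostic command from the log and prints a
    // labeled dump; failures (like the refused kubectl connection above) are
    // reported inline rather than aborting the collection.
    func gatherLogs() {
    	sources := map[string]string{
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("== %s failed: %v ==\n", name, err)
    		}
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }

    func main() { gatherLogs() }
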
	I1208 00:38:14.631602  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:14.646925  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:14.646987  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:14.674503  896760 cri.go:89] found id: ""
	I1208 00:38:14.674517  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.674524  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:14.674529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:14.674593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:14.699396  896760 cri.go:89] found id: ""
	I1208 00:38:14.699419  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.699426  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:14.699432  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:14.699503  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:14.724021  896760 cri.go:89] found id: ""
	I1208 00:38:14.724034  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.724042  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:14.724047  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:14.724106  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:14.753658  896760 cri.go:89] found id: ""
	I1208 00:38:14.753672  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.753679  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:14.753684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:14.753749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:14.781621  896760 cri.go:89] found id: ""
	I1208 00:38:14.781635  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.781643  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:14.781649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:14.781707  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:14.807494  896760 cri.go:89] found id: ""
	I1208 00:38:14.807509  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.807516  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:14.807521  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:14.807593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:14.833097  896760 cri.go:89] found id: ""
	I1208 00:38:14.833112  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.833119  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:14.833126  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:14.833136  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:14.889095  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:14.889114  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:14.903785  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:14.903800  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:14.971093  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:14.963141   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.963548   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965048   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965374   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.966849   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:14.971115  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:14.971126  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:15.034725  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:15.034748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:17.575615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:17.586181  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:17.586244  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:17.614489  896760 cri.go:89] found id: ""
	I1208 00:38:17.614503  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.614510  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:17.614516  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:17.614591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:17.646217  896760 cri.go:89] found id: ""
	I1208 00:38:17.646238  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.646245  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:17.646250  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:17.646320  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:17.672670  896760 cri.go:89] found id: ""
	I1208 00:38:17.672684  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.672699  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:17.672705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:17.672771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:17.697872  896760 cri.go:89] found id: ""
	I1208 00:38:17.697886  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.697894  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:17.697899  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:17.697960  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:17.723061  896760 cri.go:89] found id: ""
	I1208 00:38:17.723075  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.723083  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:17.723088  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:17.723148  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:17.751201  896760 cri.go:89] found id: ""
	I1208 00:38:17.751215  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.751257  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:17.751263  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:17.751327  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:17.776877  896760 cri.go:89] found id: ""
	I1208 00:38:17.776898  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.776906  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:17.776914  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:17.776924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:17.833629  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:17.833648  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:17.848545  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:17.848562  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:17.916466  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:17.907244   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.908811   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.909382   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.910922   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.911252   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:17.916477  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:17.916488  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:17.977728  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:17.977748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:20.518003  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:20.528606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:20.528668  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:20.553281  896760 cri.go:89] found id: ""
	I1208 00:38:20.553294  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.553301  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:20.553307  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:20.553362  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:20.578221  896760 cri.go:89] found id: ""
	I1208 00:38:20.578241  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.578249  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:20.578254  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:20.578315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:20.615636  896760 cri.go:89] found id: ""
	I1208 00:38:20.615650  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.615657  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:20.615662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:20.615717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:20.658083  896760 cri.go:89] found id: ""
	I1208 00:38:20.658097  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.658104  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:20.658109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:20.658167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:20.683361  896760 cri.go:89] found id: ""
	I1208 00:38:20.683375  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.683382  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:20.683387  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:20.683445  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:20.708740  896760 cri.go:89] found id: ""
	I1208 00:38:20.708754  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.708761  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:20.708767  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:20.708830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:20.733148  896760 cri.go:89] found id: ""
	I1208 00:38:20.733162  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.733169  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:20.733177  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:20.733187  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:20.789345  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:20.789364  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:20.804329  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:20.804344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:20.869258  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:20.860745   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.861580   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863087   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863478   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.865172   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:20.869270  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:20.869280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:20.935198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:20.935220  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:23.463419  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:23.473440  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:23.473514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:23.498380  896760 cri.go:89] found id: ""
	I1208 00:38:23.498395  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.498402  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:23.498407  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:23.498504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:23.524663  896760 cri.go:89] found id: ""
	I1208 00:38:23.524677  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.524683  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:23.524689  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:23.524749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:23.554276  896760 cri.go:89] found id: ""
	I1208 00:38:23.554300  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.554308  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:23.554314  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:23.554373  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:23.581295  896760 cri.go:89] found id: ""
	I1208 00:38:23.581310  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.581317  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:23.581322  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:23.581394  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:23.609485  896760 cri.go:89] found id: ""
	I1208 00:38:23.609499  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.609506  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:23.609512  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:23.609568  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:23.639329  896760 cri.go:89] found id: ""
	I1208 00:38:23.639343  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.639350  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:23.639356  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:23.639415  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:23.666775  896760 cri.go:89] found id: ""
	I1208 00:38:23.666789  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.666796  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:23.666804  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:23.666816  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:23.726052  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:23.726071  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:23.741283  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:23.741300  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:23.814882  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:23.806382   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.807106   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.808836   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.809397   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.811003   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[same five connection-refused errors repeated verbatim]
	** /stderr **
	I1208 00:38:23.814894  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:23.814918  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:23.882172  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:23.882191  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
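Each poll above follows the same diagnostic sequence: look for a kube-apiserver process with pgrep, list each expected control-plane container through crictl (all come back empty), then gather kubelet, dmesg, containerd, and container-status logs; "kubectl describe nodes" fails every time because nothing is listening on apiserver port 8441. A minimal sketch of the same checks for reproducing this by hand, with every command taken verbatim from the log above and <profile> standing in for the profile under test:

	# inside the node: minikube ssh -p <profile>
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no apiserver process (pattern quoted for interactive use)
	sudo crictl ps -a --quiet --name=kube-apiserver     # empty: the container was never created
	sudo journalctl -u kubelet -n 400                   # kubelet journal: why the static pod is missing
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig         # fails: connection refused on localhost:8441

Since crictl itself answers (the listings succeed, just empty) while no apiserver container exists, the runtime is up and the failure sits with pod creation, which is what the kubelet journal gathered above would have to explain.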
	I1208 00:38:26.416809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:26.427382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:26.427441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:26.455816  896760 cri.go:89] found id: ""
	I1208 00:38:26.455831  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.455838  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:26.455843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:26.455901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:26.481460  896760 cri.go:89] found id: ""
	I1208 00:38:26.481475  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.481482  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:26.481487  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:26.481552  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:26.511736  896760 cri.go:89] found id: ""
	I1208 00:38:26.511750  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.511757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:26.511764  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:26.511824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:26.538164  896760 cri.go:89] found id: ""
	I1208 00:38:26.538185  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.538192  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:26.538197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:26.538263  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:26.564400  896760 cri.go:89] found id: ""
	I1208 00:38:26.564415  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.564423  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:26.564428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:26.564499  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:26.592652  896760 cri.go:89] found id: ""
	I1208 00:38:26.592666  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.592684  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:26.592690  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:26.592756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:26.628887  896760 cri.go:89] found id: ""
	I1208 00:38:26.628913  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.628920  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:26.628928  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:26.628939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:26.645510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:26.645526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:26.715196  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:26.706568   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.707169   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.708723   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.709144   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.710667   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:26.706568   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.707169   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.708723   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.709144   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.710667   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:26.715212  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:26.715223  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:26.776374  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:26.776415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:26.805091  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:26.805108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.367761  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:29.378770  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:29.378841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:29.407904  896760 cri.go:89] found id: ""
	I1208 00:38:29.407918  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.407925  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:29.407937  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:29.407996  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:29.439249  896760 cri.go:89] found id: ""
	I1208 00:38:29.439263  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.439270  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:29.439275  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:29.439335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:29.464738  896760 cri.go:89] found id: ""
	I1208 00:38:29.464752  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.464760  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:29.464765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:29.464821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:29.491063  896760 cri.go:89] found id: ""
	I1208 00:38:29.491077  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.491085  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:29.491094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:29.491170  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:29.516981  896760 cri.go:89] found id: ""
	I1208 00:38:29.516995  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.517003  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:29.517008  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:29.517068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:29.542623  896760 cri.go:89] found id: ""
	I1208 00:38:29.542637  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.542644  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:29.542649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:29.542706  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:29.568339  896760 cri.go:89] found id: ""
	I1208 00:38:29.568354  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.568361  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:29.568368  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:29.568377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.628127  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:29.628145  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:29.643477  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:29.643493  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:29.719175  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:29.719187  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:29.719198  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:29.782292  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:29.782317  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.310785  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:32.321344  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:32.321408  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:32.347142  896760 cri.go:89] found id: ""
	I1208 00:38:32.347156  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.347163  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:32.347184  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:32.347243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:32.372733  896760 cri.go:89] found id: ""
	I1208 00:38:32.372748  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.372784  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:32.372789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:32.372848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:32.397366  896760 cri.go:89] found id: ""
	I1208 00:38:32.397381  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.397388  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:32.397394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:32.397458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:32.422998  896760 cri.go:89] found id: ""
	I1208 00:38:32.423012  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.423019  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:32.423025  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:32.423092  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:32.454075  896760 cri.go:89] found id: ""
	I1208 00:38:32.454089  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.454096  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:32.454102  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:32.454163  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:32.480907  896760 cri.go:89] found id: ""
	I1208 00:38:32.480931  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.480938  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:32.480945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:32.481033  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:32.508537  896760 cri.go:89] found id: ""
	I1208 00:38:32.508551  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.508559  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:32.508567  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:32.508577  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.536959  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:32.536977  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:32.594663  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:32.594683  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:32.611007  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:32.611023  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:32.685259  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:32.676109   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.676744   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.678716   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.679323   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.681064   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:32.676109   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.676744   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.678716   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.679323   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.681064   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:32.685271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:32.685293  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:35.252296  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.262679  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:35.262743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:35.288362  896760 cri.go:89] found id: ""
	I1208 00:38:35.288376  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.288384  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:35.288389  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:35.288459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:35.316681  896760 cri.go:89] found id: ""
	I1208 00:38:35.316694  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.316702  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:35.316708  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:35.316771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:35.341646  896760 cri.go:89] found id: ""
	I1208 00:38:35.341661  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.341668  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:35.341673  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:35.341737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:35.367257  896760 cri.go:89] found id: ""
	I1208 00:38:35.367271  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.367278  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:35.367284  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:35.367343  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:35.391511  896760 cri.go:89] found id: ""
	I1208 00:38:35.391526  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.391533  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:35.391538  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:35.391607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:35.416046  896760 cri.go:89] found id: ""
	I1208 00:38:35.416059  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.416067  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:35.416073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:35.416186  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:35.441892  896760 cri.go:89] found id: ""
	I1208 00:38:35.441906  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.441913  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:35.441921  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:35.441930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:35.498141  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:35.498159  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:35.513190  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:35.513206  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:35.577909  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:35.569957   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.570570   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572191   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572549   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.574028   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:35.569957   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.570570   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572191   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572549   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.574028   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:35.577920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:35.577930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:35.650521  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:35.650540  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:38.186415  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.196707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:38.196765  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:38.224642  896760 cri.go:89] found id: ""
	I1208 00:38:38.224656  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.224662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:38.224667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:38.224727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:38.250371  896760 cri.go:89] found id: ""
	I1208 00:38:38.250385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.250393  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:38.250397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:38.250490  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:38.275798  896760 cri.go:89] found id: ""
	I1208 00:38:38.275813  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.275820  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:38.275825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:38.275889  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:38.301371  896760 cri.go:89] found id: ""
	I1208 00:38:38.301385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.301393  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:38.301398  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:38.301458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:38.326436  896760 cri.go:89] found id: ""
	I1208 00:38:38.326475  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.326483  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:38.326489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:38.326548  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:38.352684  896760 cri.go:89] found id: ""
	I1208 00:38:38.352698  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.352705  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:38.352711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:38.352770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:38.377358  896760 cri.go:89] found id: ""
	I1208 00:38:38.377372  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.377379  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:38.377424  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:38.377434  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:38.433300  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:38.433319  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:38.448010  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:38.448031  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:38.509419  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:38.500422   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.500861   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.502805   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.503325   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.504822   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:38.500422   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.500861   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.502805   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.503325   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.504822   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:38.509429  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:38.509441  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:38.573641  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:38.573660  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:41.124146  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.134622  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:41.134687  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:41.158822  896760 cri.go:89] found id: ""
	I1208 00:38:41.158837  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.158844  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:41.158850  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:41.158907  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:41.183538  896760 cri.go:89] found id: ""
	I1208 00:38:41.183552  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.183559  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:41.183564  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:41.183621  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:41.211762  896760 cri.go:89] found id: ""
	I1208 00:38:41.211776  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.211783  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:41.211789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:41.211846  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:41.237660  896760 cri.go:89] found id: ""
	I1208 00:38:41.237674  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.237681  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:41.237687  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:41.237746  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:41.263629  896760 cri.go:89] found id: ""
	I1208 00:38:41.263644  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.263651  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:41.263656  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:41.263715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:41.289465  896760 cri.go:89] found id: ""
	I1208 00:38:41.289479  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.289486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:41.289498  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:41.289559  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:41.316932  896760 cri.go:89] found id: ""
	I1208 00:38:41.316948  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.316955  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:41.316963  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:41.316974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:41.380746  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:41.380766  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:41.395918  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:41.395934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:41.460910  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:41.460920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:41.460932  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:41.524405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:41.524433  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:44.057087  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.067409  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:44.067469  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:44.092978  896760 cri.go:89] found id: ""
	I1208 00:38:44.092992  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.093000  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:44.093005  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:44.093063  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:44.118425  896760 cri.go:89] found id: ""
	I1208 00:38:44.118439  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.118468  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:44.118473  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:44.118537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:44.147582  896760 cri.go:89] found id: ""
	I1208 00:38:44.147597  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.147605  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:44.147610  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:44.147672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:44.173039  896760 cri.go:89] found id: ""
	I1208 00:38:44.173052  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.173060  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:44.173066  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:44.173122  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:44.200035  896760 cri.go:89] found id: ""
	I1208 00:38:44.200048  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.200056  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:44.200064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:44.200124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:44.228628  896760 cri.go:89] found id: ""
	I1208 00:38:44.228643  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.228652  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:44.228658  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:44.228723  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:44.253637  896760 cri.go:89] found id: ""
	I1208 00:38:44.253651  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.253658  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:44.253666  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:44.253678  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:44.285985  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:44.286001  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:44.342819  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:44.342837  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:44.357562  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:44.357578  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:44.424802  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:44.424813  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:44.424823  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:46.987663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.998722  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:46.998782  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:47.028919  896760 cri.go:89] found id: ""
	I1208 00:38:47.028933  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.028941  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:47.028947  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:47.029019  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:47.054503  896760 cri.go:89] found id: ""
	I1208 00:38:47.054517  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.054524  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:47.054529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:47.054591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:47.080198  896760 cri.go:89] found id: ""
	I1208 00:38:47.080213  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.080220  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:47.080226  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:47.080295  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:47.109584  896760 cri.go:89] found id: ""
	I1208 00:38:47.109600  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.109615  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:47.109621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:47.109705  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:47.140105  896760 cri.go:89] found id: ""
	I1208 00:38:47.140121  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.140128  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:47.140134  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:47.140194  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:47.170105  896760 cri.go:89] found id: ""
	I1208 00:38:47.170119  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.170126  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:47.170131  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:47.170192  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:47.194381  896760 cri.go:89] found id: ""
	I1208 00:38:47.194396  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.194403  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:47.194411  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:47.194421  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:47.250853  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:47.250872  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:47.265858  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:47.265878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:47.337098  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:47.328184   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.328652   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330348   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330938   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.332507   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:47.337113  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:47.337129  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:47.400033  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:47.400053  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
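Each retry cycle above begins by asking the CRI runtime whether any control-plane container exists, running or exited. A minimal standalone sketch of that check, assuming only that crictl is installed on the node (as it is inside the minikube container):

    #!/usr/bin/env bash
    # List containers for each control-plane component; -a includes exited
    # containers, --quiet prints only container IDs, --name filters by name.
    set -euo pipefail
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name -> $ids"
      fi
    done

An empty result for every component, as in this run, means containerd never created the control-plane containers at all, which is consistent with the later "connection refused" errors on port 8441.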
	I1208 00:38:49.930600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.941210  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:49.941272  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:49.965928  896760 cri.go:89] found id: ""
	I1208 00:38:49.965942  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.965949  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:49.965954  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:49.966013  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:49.991571  896760 cri.go:89] found id: ""
	I1208 00:38:49.991585  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.991592  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:49.991597  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:49.991661  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:50.031199  896760 cri.go:89] found id: ""
	I1208 00:38:50.031218  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.031226  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:50.031233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:50.031308  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:50.058807  896760 cri.go:89] found id: ""
	I1208 00:38:50.058822  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.058830  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:50.058836  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:50.058898  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:50.089259  896760 cri.go:89] found id: ""
	I1208 00:38:50.089273  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.089281  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:50.089287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:50.089360  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:50.115363  896760 cri.go:89] found id: ""
	I1208 00:38:50.115377  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.115385  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:50.115391  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:50.115454  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:50.144975  896760 cri.go:89] found id: ""
	I1208 00:38:50.144990  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.144998  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:50.145006  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:50.145020  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:50.160213  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:50.160230  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:50.226659  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:50.226669  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:50.226681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:50.288844  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:50.288865  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:50.321807  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:50.321824  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
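The "sudo pgrep -xnf kube-apiserver.*minikube.*" line that opens each cycle is the liveness probe driving the retries. A sketch of what it does (flag meanings from pgrep(1); the quoting below is an assumption, since the log shows the pattern unquoted):

    # -f matches against the full command line, -x requires the pattern to
    # match that command line exactly, and -n keeps only the newest match.
    if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
      echo "kube-apiserver is running as PID $pid"
    else
      echo "kube-apiserver process not found"   # pgrep exits 1 on no match
    fi

Throughout this section the probe keeps failing, so the log-gathering pass repeats on every cycle.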
	I1208 00:38:52.878758  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.892078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:52.892141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:52.919955  896760 cri.go:89] found id: ""
	I1208 00:38:52.919969  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.919977  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:52.919982  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:52.920041  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:52.946242  896760 cri.go:89] found id: ""
	I1208 00:38:52.946256  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.946264  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:52.946269  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:52.946331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:52.976452  896760 cri.go:89] found id: ""
	I1208 00:38:52.976467  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.976475  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:52.976480  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:52.976542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:53.005608  896760 cri.go:89] found id: ""
	I1208 00:38:53.005635  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.005644  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:53.005652  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:53.005729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:53.033758  896760 cri.go:89] found id: ""
	I1208 00:38:53.033773  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.033784  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:53.033789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:53.033848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:53.063554  896760 cri.go:89] found id: ""
	I1208 00:38:53.063568  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.063575  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:53.063581  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:53.063644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:53.093217  896760 cri.go:89] found id: ""
	I1208 00:38:53.093233  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.093241  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:53.093249  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:53.093260  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:53.152571  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:53.152591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:53.167769  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:53.167785  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:53.232572  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:53.232583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:53.232604  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:53.301625  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:53.301653  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
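The log-gathering pass pulls from three host-side sources each time: the kubelet journal, the containerd journal, and the kernel ring buffer. A sketch reproducing those commands as they appear in the log (unit names and line counts taken verbatim from the entries above):

    sudo journalctl -u kubelet -n 400        # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400     # last 400 containerd journal lines
    # Kernel messages: -P disables the pager, -H forces human-readable
    # output, -L=never disables color, --level keeps warnings and worse.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400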
	I1208 00:38:55.831231  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.843576  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:55.843680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:55.878177  896760 cri.go:89] found id: ""
	I1208 00:38:55.878191  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.878198  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:55.878203  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:55.878260  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:55.904640  896760 cri.go:89] found id: ""
	I1208 00:38:55.904660  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.904667  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:55.904672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:55.904729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:55.930143  896760 cri.go:89] found id: ""
	I1208 00:38:55.930156  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.930163  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:55.930168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:55.930223  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:55.954696  896760 cri.go:89] found id: ""
	I1208 00:38:55.954710  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.954717  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:55.954723  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:55.954779  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:55.979424  896760 cri.go:89] found id: ""
	I1208 00:38:55.979438  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.979445  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:55.979453  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:55.979513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:56.010864  896760 cri.go:89] found id: ""
	I1208 00:38:56.010879  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.010887  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:56.010893  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:56.010959  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:56.038141  896760 cri.go:89] found id: ""
	I1208 00:38:56.038155  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.038163  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:56.038171  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:56.038183  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:56.105328  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:56.105339  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:56.105350  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:56.167859  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:56.167878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:56.195618  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:56.195634  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:56.254386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:56.254406  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
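Every "describe nodes" attempt fails the same way: kubectl cannot reach https://localhost:8441, so the kubeconfig is never even exercised against a live server. One way to separate "apiserver not listening" from "kubeconfig misconfigured" is to probe the port directly; the sketch below assumes the standard /readyz health endpoint that kube-apiserver serves, which is not itself shown in this log:

    # Exits non-zero with "connection refused" while nothing listens on 8441.
    curl -sk --max-time 5 https://localhost:8441/readyz \
      || echo "nothing listening on localhost:8441"

Here the refusal matches the crictl results above: no kube-apiserver container exists, so nothing can be bound to the port.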
	I1208 00:38:58.770585  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.780949  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:58.781010  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:58.804624  896760 cri.go:89] found id: ""
	I1208 00:38:58.804638  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.804645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:58.804651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:58.804710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:58.830257  896760 cri.go:89] found id: ""
	I1208 00:38:58.830271  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.830278  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:58.830283  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:58.830341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:58.870359  896760 cri.go:89] found id: ""
	I1208 00:38:58.870383  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.870390  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:58.870396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:58.870501  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:58.897347  896760 cri.go:89] found id: ""
	I1208 00:38:58.897361  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.897368  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:58.897373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:58.897431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:58.927474  896760 cri.go:89] found id: ""
	I1208 00:38:58.927488  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.927496  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:58.927501  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:58.927563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:58.953358  896760 cri.go:89] found id: ""
	I1208 00:38:58.953372  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.953380  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:58.953386  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:58.953443  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:58.978092  896760 cri.go:89] found id: ""
	I1208 00:38:58.978107  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.978116  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:58.978124  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:58.978134  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:59.008505  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:59.008524  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:59.067065  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:59.067095  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:59.081827  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:59.081843  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:59.148151  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:59.148161  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:59.148172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
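The "container status" command uses a small fallback chain: run crictl if it resolves on PATH, otherwise fall back to docker. Expanded into explicit form (same behavior as the one-liner in the log, written out for readability):

    # Prefer the CRI view (containerd in this run); fall back to Docker.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi

The backtick form in the log ("which crictl || echo crictl") additionally lets the command fall through to "docker ps -a" when crictl exists but fails at runtime, since the trailing || guards the whole first command.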
	I1208 00:39:01.713848  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.724264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:01.724326  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:01.752237  896760 cri.go:89] found id: ""
	I1208 00:39:01.752251  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.752258  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:01.752264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:01.752325  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:01.778116  896760 cri.go:89] found id: ""
	I1208 00:39:01.778129  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.778136  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:01.778141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:01.778213  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:01.807711  896760 cri.go:89] found id: ""
	I1208 00:39:01.807725  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.807731  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:01.807737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:01.807798  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:01.836797  896760 cri.go:89] found id: ""
	I1208 00:39:01.836812  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.836820  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:01.836826  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:01.836884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:01.863221  896760 cri.go:89] found id: ""
	I1208 00:39:01.863235  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.863242  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:01.863247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:01.863307  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:01.902460  896760 cri.go:89] found id: ""
	I1208 00:39:01.902476  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.902483  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:01.902489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:01.902558  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:01.930861  896760 cri.go:89] found id: ""
	I1208 00:39:01.930874  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.930882  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:01.930889  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:01.930900  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:01.987172  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:01.987190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:02.006975  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:02.006993  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:02.075975  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:02.076005  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:02.076017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:02.142423  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:02.142453  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:04.675643  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.688662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:04.688743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:04.716050  896760 cri.go:89] found id: ""
	I1208 00:39:04.716065  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.716072  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:04.716078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:04.716141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:04.742668  896760 cri.go:89] found id: ""
	I1208 00:39:04.742682  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.742690  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:04.742695  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:04.742756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:04.769375  896760 cri.go:89] found id: ""
	I1208 00:39:04.769388  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.769396  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:04.769401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:04.769459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:04.795270  896760 cri.go:89] found id: ""
	I1208 00:39:04.795284  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.795291  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:04.795297  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:04.795354  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:04.822245  896760 cri.go:89] found id: ""
	I1208 00:39:04.822258  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.822265  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:04.822271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:04.822330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:04.859401  896760 cri.go:89] found id: ""
	I1208 00:39:04.859414  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.859422  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:04.859428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:04.859486  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:04.896707  896760 cri.go:89] found id: ""
	I1208 00:39:04.896721  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.896728  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:04.896736  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:04.896745  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:04.967586  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:04.967603  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:04.983057  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:04.983080  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:05.060799  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:05.060821  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:05.060832  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:05.123856  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:05.123875  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:07.653529  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.664109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:07.664168  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:07.689362  896760 cri.go:89] found id: ""
	I1208 00:39:07.689376  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.689383  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:07.689388  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:07.689448  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:07.714707  896760 cri.go:89] found id: ""
	I1208 00:39:07.714722  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.714729  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:07.714734  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:07.714792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:07.740750  896760 cri.go:89] found id: ""
	I1208 00:39:07.740765  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.740771  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:07.740777  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:07.740834  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:07.765622  896760 cri.go:89] found id: ""
	I1208 00:39:07.765637  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.765645  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:07.765650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:07.765714  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:07.790729  896760 cri.go:89] found id: ""
	I1208 00:39:07.790744  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.790751  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:07.790756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:07.790824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:07.821100  896760 cri.go:89] found id: ""
	I1208 00:39:07.821114  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.821122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:07.821127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:07.821185  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:07.855011  896760 cri.go:89] found id: ""
	I1208 00:39:07.855025  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.855042  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:07.855050  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:07.855061  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:07.916163  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:07.916184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:07.931656  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:07.931672  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:08.007997  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:08.008026  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:08.008039  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:08.079922  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:08.079944  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:10.614429  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.625953  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:10.626015  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:10.651687  896760 cri.go:89] found id: ""
	I1208 00:39:10.651701  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.651708  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:10.651714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:10.651774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:10.676412  896760 cri.go:89] found id: ""
	I1208 00:39:10.676426  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.676433  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:10.676439  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:10.676507  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:10.705971  896760 cri.go:89] found id: ""
	I1208 00:39:10.705986  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.705992  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:10.705998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:10.706058  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:10.730598  896760 cri.go:89] found id: ""
	I1208 00:39:10.730621  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.730629  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:10.730634  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:10.730695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:10.756672  896760 cri.go:89] found id: ""
	I1208 00:39:10.756694  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.756702  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:10.756707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:10.756770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:10.786647  896760 cri.go:89] found id: ""
	I1208 00:39:10.786671  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.786679  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:10.786685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:10.786753  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:10.813024  896760 cri.go:89] found id: ""
	I1208 00:39:10.813037  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.813045  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:10.813063  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:10.813074  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:10.870687  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:10.870718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:10.887434  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:10.887451  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:10.955043  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:10.955053  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:10.955064  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:11.016735  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:11.016756  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
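
The block above is one full iteration of minikube's wait loop: a pgrep for a kube-apiserver process, a per-component `crictl ps -a --quiet --name=...` sweep that keeps coming back empty, then a fresh round of log gathering before the next attempt roughly three seconds later. A minimal Go sketch of that polling pattern follows; the deadline, interval, and function names are illustrative assumptions, and only the crictl invocation is taken verbatim from the log — this is not minikube's actual implementation.

```go
// Hypothetical re-creation of the poll loop seen above: query for a
// kube-apiserver container every few seconds until one appears or a
// deadline passes. Only the crictl command line mirrors the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverContainerIDs runs the same query the log issues:
// sudo crictl ps -a --quiet --name=kube-apiserver
func apiserverContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if ids, err := apiserverContainerIDs(); err == nil && len(ids) > 0 {
			fmt.Printf("found kube-apiserver container(s): %v\n", ids)
			return
		}
		time.Sleep(3 * time.Second) // interval matches the ~3s spacing of the timestamps
	}
	fmt.Println("no kube-apiserver container appeared before the deadline")
}
```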
	I1208 00:39:13.547727  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.558158  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:13.558216  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:13.583025  896760 cri.go:89] found id: ""
	I1208 00:39:13.583045  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.583053  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:13.583058  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:13.583119  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:13.608731  896760 cri.go:89] found id: ""
	I1208 00:39:13.608744  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.608751  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:13.608756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:13.608815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:13.634817  896760 cri.go:89] found id: ""
	I1208 00:39:13.634831  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.634838  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:13.634843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:13.634905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:13.659255  896760 cri.go:89] found id: ""
	I1208 00:39:13.659269  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.659276  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:13.659281  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:13.659341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:13.683853  896760 cri.go:89] found id: ""
	I1208 00:39:13.683867  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.683882  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:13.683888  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:13.683949  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:13.708780  896760 cri.go:89] found id: ""
	I1208 00:39:13.708795  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.708802  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:13.708807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:13.708864  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:13.734678  896760 cri.go:89] found id: ""
	I1208 00:39:13.734692  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.734699  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:13.734708  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:13.734718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:13.790576  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:13.790597  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:13.805551  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:13.805567  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:13.884689  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:13.884710  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:13.884721  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:13.954356  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:13.954379  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:16.485706  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.496517  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:16.496577  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:16.526348  896760 cri.go:89] found id: ""
	I1208 00:39:16.526363  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.526370  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:16.526376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:16.526459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:16.551935  896760 cri.go:89] found id: ""
	I1208 00:39:16.551949  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.551962  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:16.551968  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:16.552028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:16.576320  896760 cri.go:89] found id: ""
	I1208 00:39:16.576333  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.576340  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:16.576345  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:16.576403  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:16.605756  896760 cri.go:89] found id: ""
	I1208 00:39:16.605770  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.605777  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:16.605783  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:16.605839  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:16.632121  896760 cri.go:89] found id: ""
	I1208 00:39:16.632134  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.632141  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:16.632146  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:16.632203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:16.660423  896760 cri.go:89] found id: ""
	I1208 00:39:16.660437  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.660444  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:16.660450  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:16.660531  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:16.685576  896760 cri.go:89] found id: ""
	I1208 00:39:16.685595  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.685602  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:16.685610  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:16.685620  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:16.740694  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:16.740712  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:16.755790  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:16.755806  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:16.821132  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:16.821152  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:16.821164  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:16.887057  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:16.887076  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
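
Every `kubectl describe nodes` attempt in these cycles dies the same way: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port at all, which is consistent with the empty crictl sweeps. A quick way to confirm that diagnosis independently of kubectl — a hedged sketch, not part of the test suite — is to dial the port directly:

```go
// Probe the apiserver port; a refused dial confirms that no process is
// listening on localhost:8441, matching the kubectl errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Printf("dial failed (apiserver down?): %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```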
	I1208 00:39:19.418598  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.428681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:19.428748  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:19.455940  896760 cri.go:89] found id: ""
	I1208 00:39:19.455953  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.455961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:19.455966  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:19.456027  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:19.482046  896760 cri.go:89] found id: ""
	I1208 00:39:19.482060  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.482067  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:19.482073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:19.482130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:19.510706  896760 cri.go:89] found id: ""
	I1208 00:39:19.510720  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.510728  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:19.510733  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:19.510792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:19.535505  896760 cri.go:89] found id: ""
	I1208 00:39:19.535520  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.535528  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:19.535533  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:19.535601  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:19.560234  896760 cri.go:89] found id: ""
	I1208 00:39:19.560248  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.560255  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:19.560261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:19.560328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:19.584606  896760 cri.go:89] found id: ""
	I1208 00:39:19.584621  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.584629  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:19.584637  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:19.584695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:19.613195  896760 cri.go:89] found id: ""
	I1208 00:39:19.613226  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.613234  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:19.613242  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:19.613252  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:19.670165  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:19.670184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:19.685327  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:19.685351  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:19.749894  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:19.749914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:19.749928  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:19.812758  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:19.812779  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:22.352520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.362719  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:22.362790  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:22.387649  896760 cri.go:89] found id: ""
	I1208 00:39:22.387662  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.387669  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:22.387675  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:22.387734  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:22.416444  896760 cri.go:89] found id: ""
	I1208 00:39:22.416458  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.416465  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:22.416470  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:22.416538  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:22.442291  896760 cri.go:89] found id: ""
	I1208 00:39:22.442305  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.442312  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:22.442317  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:22.442377  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:22.466919  896760 cri.go:89] found id: ""
	I1208 00:39:22.466933  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.466940  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:22.466945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:22.467011  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:22.492435  896760 cri.go:89] found id: ""
	I1208 00:39:22.492449  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.492456  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:22.492461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:22.492526  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:22.518157  896760 cri.go:89] found id: ""
	I1208 00:39:22.518183  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.518190  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:22.518197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:22.518266  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:22.544341  896760 cri.go:89] found id: ""
	I1208 00:39:22.544356  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.544363  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:22.544371  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:22.544389  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:22.601655  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:22.601676  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:22.617670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:22.617700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:22.686714  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:22.686725  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:22.686736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:22.749600  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:22.749621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.281783  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.292163  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:25.292227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:25.316234  896760 cri.go:89] found id: ""
	I1208 00:39:25.316249  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.316257  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:25.316262  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:25.316330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:25.350433  896760 cri.go:89] found id: ""
	I1208 00:39:25.350478  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.350485  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:25.350491  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:25.350562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:25.376982  896760 cri.go:89] found id: ""
	I1208 00:39:25.376996  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.377004  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:25.377009  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:25.377076  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:25.402484  896760 cri.go:89] found id: ""
	I1208 00:39:25.402499  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.402506  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:25.402511  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:25.402580  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:25.429596  896760 cri.go:89] found id: ""
	I1208 00:39:25.429611  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.429618  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:25.429624  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:25.429692  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:25.455037  896760 cri.go:89] found id: ""
	I1208 00:39:25.455051  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.455059  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:25.455064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:25.455130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:25.484391  896760 cri.go:89] found id: ""
	I1208 00:39:25.484404  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.484412  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:25.484420  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:25.484430  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.512262  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:25.512282  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:25.569524  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:25.569543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:25.584301  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:25.584316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:25.650571  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:25.650583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:25.650594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.218586  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.229069  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:28.229127  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:28.254473  896760 cri.go:89] found id: ""
	I1208 00:39:28.254487  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.254494  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:28.254499  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:28.254563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:28.283388  896760 cri.go:89] found id: ""
	I1208 00:39:28.283403  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.283410  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:28.283418  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:28.283475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:28.310968  896760 cri.go:89] found id: ""
	I1208 00:39:28.310983  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.310990  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:28.310995  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:28.311061  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:28.336049  896760 cri.go:89] found id: ""
	I1208 00:39:28.336064  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.336072  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:28.336078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:28.336141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:28.360451  896760 cri.go:89] found id: ""
	I1208 00:39:28.360464  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.360470  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:28.360475  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:28.360542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:28.385117  896760 cri.go:89] found id: ""
	I1208 00:39:28.385131  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.385138  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:28.385143  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:28.385196  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:28.408915  896760 cri.go:89] found id: ""
	I1208 00:39:28.408928  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.408935  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:28.408943  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:28.408953  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:28.423316  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:28.423332  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:28.486812  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:28.486823  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:28.486833  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.553325  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:28.553344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:28.582011  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:28.582027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
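
Note that the gathering order shuffles between iterations: one cycle above began with container status and another with dmesg, ending with kubelet, while most cycles begin with kubelet. That is consistent with the log sections being ranged over from a Go map, whose iteration order is deliberately randomized — the map below is an assumption for illustration, with only the section names taken from the log.

```go
// Demonstrates Go's randomized map iteration order, which would explain why
// the "Gathering logs for ..." sections appear in a different order each cycle.
package main

import "fmt"

func main() {
	sections := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg --level warn,err,crit,alert,emerg",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a",
	}
	for name := range sections { // order varies from run to run
		fmt.Println("Gathering logs for", name, "...")
	}
}
```

Go randomizes map range order precisely so that code cannot depend on it, so the shuffling seen here carries no diagnostic weight on its own.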
	I1208 00:39:31.143204  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.154196  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:31.154264  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:31.181624  896760 cri.go:89] found id: ""
	I1208 00:39:31.181638  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.181645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:31.181650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:31.181713  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:31.207658  896760 cri.go:89] found id: ""
	I1208 00:39:31.207672  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.207679  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:31.207684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:31.207742  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:31.233323  896760 cri.go:89] found id: ""
	I1208 00:39:31.233338  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.233345  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:31.233351  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:31.233411  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:31.258320  896760 cri.go:89] found id: ""
	I1208 00:39:31.258335  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.258342  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:31.258347  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:31.258406  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:31.283846  896760 cri.go:89] found id: ""
	I1208 00:39:31.283860  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.283868  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:31.283873  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:31.283931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:31.310064  896760 cri.go:89] found id: ""
	I1208 00:39:31.310079  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.310086  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:31.310091  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:31.310149  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:31.337328  896760 cri.go:89] found id: ""
	I1208 00:39:31.337350  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.337358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:31.337367  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:31.337377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.392950  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:31.392969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:31.407922  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:31.407939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:31.474904  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:31.474915  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:31.474925  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:31.536814  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:31.536834  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:34.069082  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:34.079471  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:34.079532  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:34.121832  896760 cri.go:89] found id: ""
	I1208 00:39:34.121846  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.121853  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:34.121859  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:34.121923  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:34.151527  896760 cri.go:89] found id: ""
	I1208 00:39:34.151541  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.151548  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:34.151553  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:34.151613  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:34.179098  896760 cri.go:89] found id: ""
	I1208 00:39:34.179113  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.179121  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:34.179126  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:34.179184  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:34.209529  896760 cri.go:89] found id: ""
	I1208 00:39:34.209548  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.209563  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:34.209568  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:34.209655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:34.238235  896760 cri.go:89] found id: ""
	I1208 00:39:34.238249  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.238256  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:34.238261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:34.238318  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:34.263739  896760 cri.go:89] found id: ""
	I1208 00:39:34.263752  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.263760  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:34.263765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:34.263838  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:34.293315  896760 cri.go:89] found id: ""
	I1208 00:39:34.293330  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.293337  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:34.293345  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:34.293356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:34.348849  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:34.348873  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:34.363941  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:34.363958  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:34.430475  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:34.430487  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:34.430501  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:34.492396  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:34.492415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.025457  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:37.036130  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:37.036201  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:37.063581  896760 cri.go:89] found id: ""
	I1208 00:39:37.063595  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.063602  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:37.063609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:37.063672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:37.088301  896760 cri.go:89] found id: ""
	I1208 00:39:37.088320  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.088328  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:37.088334  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:37.088395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:37.124388  896760 cri.go:89] found id: ""
	I1208 00:39:37.124402  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.124409  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:37.124417  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:37.124474  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:37.154807  896760 cri.go:89] found id: ""
	I1208 00:39:37.154821  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.154838  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:37.154843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:37.154912  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:37.180191  896760 cri.go:89] found id: ""
	I1208 00:39:37.180204  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.180212  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:37.180217  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:37.180279  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:37.205379  896760 cri.go:89] found id: ""
	I1208 00:39:37.205394  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.205402  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:37.205408  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:37.205487  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:37.233231  896760 cri.go:89] found id: ""
	I1208 00:39:37.233245  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.233264  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:37.233271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:37.233280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:37.297690  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:37.297709  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.325655  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:37.325682  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:37.385822  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:37.385841  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:37.400660  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:37.400685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:37.463113  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:39.963375  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:39.974152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:39.974214  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:40.011461  896760 cri.go:89] found id: ""
	I1208 00:39:40.011477  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.011485  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:40.011492  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:40.011588  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:40.050774  896760 cri.go:89] found id: ""
	I1208 00:39:40.050789  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.050810  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:40.050819  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:40.050895  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:40.078693  896760 cri.go:89] found id: ""
	I1208 00:39:40.078712  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.078737  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:40.078743  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:40.078832  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:40.119774  896760 cri.go:89] found id: ""
	I1208 00:39:40.119787  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.119806  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:40.119812  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:40.119870  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:40.150656  896760 cri.go:89] found id: ""
	I1208 00:39:40.150682  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.150689  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:40.150694  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:40.150761  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:40.182218  896760 cri.go:89] found id: ""
	I1208 00:39:40.182233  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.182247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:40.182253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:40.182329  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:40.212756  896760 cri.go:89] found id: ""
	I1208 00:39:40.212770  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.212778  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:40.212786  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:40.212796  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:40.271111  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:40.271135  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:40.286128  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:40.286144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:40.350612  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:40.350622  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:40.350633  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:40.413198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:40.413217  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:42.941473  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:42.951830  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:42.951896  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:42.977279  896760 cri.go:89] found id: ""
	I1208 00:39:42.977294  896760 logs.go:282] 0 containers: []
	W1208 00:39:42.977303  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:42.977309  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:42.977378  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:43.005862  896760 cri.go:89] found id: ""
	I1208 00:39:43.005878  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.005886  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:43.005891  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:43.006072  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:43.033594  896760 cri.go:89] found id: ""
	I1208 00:39:43.033609  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.033616  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:43.033621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:43.033700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:43.058971  896760 cri.go:89] found id: ""
	I1208 00:39:43.058986  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.058993  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:43.058999  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:43.059056  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:43.084568  896760 cri.go:89] found id: ""
	I1208 00:39:43.084582  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.084590  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:43.084595  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:43.084657  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:43.121795  896760 cri.go:89] found id: ""
	I1208 00:39:43.121810  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.121818  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:43.121823  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:43.121884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:43.151337  896760 cri.go:89] found id: ""
	I1208 00:39:43.151351  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.151358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:43.151365  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:43.151375  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:43.212011  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:43.212032  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:43.227510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:43.227526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:43.293650  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:43.293672  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:43.293684  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:43.355405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:43.355425  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:45.883665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:45.894220  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:45.894287  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:45.919120  896760 cri.go:89] found id: ""
	I1208 00:39:45.919134  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.919141  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:45.919147  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:45.919202  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:45.944078  896760 cri.go:89] found id: ""
	I1208 00:39:45.944092  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.944100  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:45.944105  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:45.944171  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:45.969419  896760 cri.go:89] found id: ""
	I1208 00:39:45.969433  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.969440  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:45.969445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:45.969504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:45.999721  896760 cri.go:89] found id: ""
	I1208 00:39:45.999736  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.999744  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:45.999749  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:45.999807  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:46.027671  896760 cri.go:89] found id: ""
	I1208 00:39:46.027685  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.027697  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:46.027705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:46.027763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:46.053035  896760 cri.go:89] found id: ""
	I1208 00:39:46.053050  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.053058  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:46.053064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:46.053124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:46.077745  896760 cri.go:89] found id: ""
	I1208 00:39:46.077759  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.077767  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:46.077775  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:46.077786  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:46.137068  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:46.137086  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:46.153304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:46.153320  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:46.226313  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:46.226334  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:46.226345  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:46.290116  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:46.290137  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:48.819903  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:48.830265  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:48.830328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:48.855396  896760 cri.go:89] found id: ""
	I1208 00:39:48.855411  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.855418  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:48.855423  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:48.855483  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:48.880269  896760 cri.go:89] found id: ""
	I1208 00:39:48.880282  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.880289  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:48.880294  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:48.880353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:48.904626  896760 cri.go:89] found id: ""
	I1208 00:39:48.904641  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.904648  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:48.904653  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:48.904715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:48.930484  896760 cri.go:89] found id: ""
	I1208 00:39:48.930511  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.930519  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:48.930528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:48.930609  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:48.956159  896760 cri.go:89] found id: ""
	I1208 00:39:48.956173  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.956180  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:48.956185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:48.956243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:48.984643  896760 cri.go:89] found id: ""
	I1208 00:39:48.984657  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.984664  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:48.984670  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:48.984737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:49.012694  896760 cri.go:89] found id: ""
	I1208 00:39:49.012708  896760 logs.go:282] 0 containers: []
	W1208 00:39:49.012716  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:49.012724  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:49.012736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:49.042898  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:49.042915  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:49.099079  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:49.099099  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:49.118877  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:49.118895  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:49.190253  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:49.190263  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:49.190273  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:51.751406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:51.761914  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:51.761973  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:51.788354  896760 cri.go:89] found id: ""
	I1208 00:39:51.788367  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.788375  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:51.788381  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:51.788441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:51.816636  896760 cri.go:89] found id: ""
	I1208 00:39:51.816651  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.816658  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:51.816664  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:51.816735  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:51.842160  896760 cri.go:89] found id: ""
	I1208 00:39:51.842174  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.842181  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:51.842187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:51.842249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:51.867343  896760 cri.go:89] found id: ""
	I1208 00:39:51.867358  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.867365  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:51.867371  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:51.867432  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:51.891589  896760 cri.go:89] found id: ""
	I1208 00:39:51.891604  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.891611  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:51.891616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:51.891681  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:51.915982  896760 cri.go:89] found id: ""
	I1208 00:39:51.915997  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.916016  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:51.916023  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:51.916081  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:51.940386  896760 cri.go:89] found id: ""
	I1208 00:39:51.940399  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.940406  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:51.940414  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:51.940424  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:51.995386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:51.995404  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:52.011670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:52.011689  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:52.085018  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:52.085029  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:52.085041  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:52.155066  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:52.155085  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:54.698041  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:54.708958  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:54.709024  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:54.734899  896760 cri.go:89] found id: ""
	I1208 00:39:54.734913  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.734921  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:54.734926  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:54.734985  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:54.761966  896760 cri.go:89] found id: ""
	I1208 00:39:54.761981  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.761988  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:54.761993  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:54.762052  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:54.787505  896760 cri.go:89] found id: ""
	I1208 00:39:54.787519  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.787526  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:54.787532  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:54.787595  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:54.813125  896760 cri.go:89] found id: ""
	I1208 00:39:54.813139  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.813147  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:54.813152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:54.813212  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:54.840170  896760 cri.go:89] found id: ""
	I1208 00:39:54.840185  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.840193  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:54.840198  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:54.840269  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:54.865780  896760 cri.go:89] found id: ""
	I1208 00:39:54.865794  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.865801  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:54.865807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:54.865867  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:54.890971  896760 cri.go:89] found id: ""
	I1208 00:39:54.890992  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.891000  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:54.891007  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:54.891017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:54.953695  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:54.953715  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:54.985753  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:54.985770  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:55.051156  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:55.051176  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:55.066530  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:55.066547  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:55.148075  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:57.649726  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:57.660051  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:57.660109  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:57.685992  896760 cri.go:89] found id: ""
	I1208 00:39:57.686008  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.686015  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:57.686022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:57.686165  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:57.711195  896760 cri.go:89] found id: ""
	I1208 00:39:57.711209  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.711216  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:57.711224  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:57.711285  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:57.735850  896760 cri.go:89] found id: ""
	I1208 00:39:57.735864  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.735871  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:57.735877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:57.735936  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:57.761018  896760 cri.go:89] found id: ""
	I1208 00:39:57.761032  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.761040  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:57.761045  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:57.761110  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:57.787523  896760 cri.go:89] found id: ""
	I1208 00:39:57.787537  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.787544  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:57.787550  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:57.787607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:57.813621  896760 cri.go:89] found id: ""
	I1208 00:39:57.813641  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.813648  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:57.813654  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:57.813717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:57.837687  896760 cri.go:89] found id: ""
	I1208 00:39:57.837700  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.837707  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:57.837715  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:57.837725  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:57.901756  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:57.901780  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:57.931916  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:57.931943  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:57.989769  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:57.989791  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:58.005304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:58.005324  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:58.084868  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:58.076761   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.077366   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.078876   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.079370   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.080995   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
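Every "describe nodes" attempt in this window fails the same way: kubectl, invoked through the version-pinned binary under /var/lib/minikube/binaries and the in-VM kubeconfig, cannot reach the apiserver on localhost:8441 because nothing is listening there yet. A quick manual probe of the same endpoint would confirm this; the snippet below is illustrative only (the port comes from the log, but curl being available on the node and the /healthz endpoint are assumptions, not taken from this run):

    # Illustrative probe of the endpoint kubectl keeps failing to reach.
    # Port 8441 is from the log; curl and /healthz are assumptions.
    curl -k --max-time 5 https://localhost:8441/healthz \
      || echo "apiserver not reachable on :8441"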
	I1208 00:40:00.590352  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
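The probe that drives each retry is a single pgrep call: -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest matching PID, so the check succeeds as soon as a kube-apiserver process for this profile exists:

    # Same check as above (pattern quoted here for shell safety):
    # exit status 0 only once a matching process exists.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'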
	I1208 00:40:00.608394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:00.608458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:00.646794  896760 cri.go:89] found id: ""
	I1208 00:40:00.646810  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.646818  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:00.646825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:00.646893  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:00.722151  896760 cri.go:89] found id: ""
	I1208 00:40:00.722167  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.722175  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:00.722180  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:00.722252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:00.751689  896760 cri.go:89] found id: ""
	I1208 00:40:00.751705  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.751713  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:00.751720  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:00.751795  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:00.782551  896760 cri.go:89] found id: ""
	I1208 00:40:00.782577  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.782586  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:00.782593  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:00.782674  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:00.813259  896760 cri.go:89] found id: ""
	I1208 00:40:00.813275  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.813282  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:00.813287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:00.813353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:00.843171  896760 cri.go:89] found id: ""
	I1208 00:40:00.843193  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.843201  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:00.843206  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:00.843270  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:00.872240  896760 cri.go:89] found id: ""
	I1208 00:40:00.872266  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.872275  896760 logs.go:284] No container was found matching "kindnet"
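The seven "listing CRI containers" / "found id" / "No container was found" triplets above are one scan per expected control-plane component, and all of them come back empty because no Kubernetes containers have been created yet. A compact equivalent of that scan, using the same crictl invocation the log shows:

    # One pass over the components minikube checks for;
    # component names and crictl flags are taken from the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      [ -n "$(sudo crictl ps -a --quiet --name="$c")" ] \
        || echo "no container matching \"$c\""
    done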
	I1208 00:40:00.872283  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:00.872297  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:00.933096  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:00.933116  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
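Both host-log gathers are bounded so the report stays readable. For reference, the flags used above:

    # journalctl: -u selects the systemd unit, -n 400 keeps the last 400 lines.
    sudo journalctl -u kubelet -n 400
    # dmesg (util-linux): -P no pager, -H human-readable timestamps, -L=never
    # disables color, --level keeps only the listed severities; tail bounds output.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400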
	I1208 00:40:00.949661  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:00.949685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:01.022088  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:01.012633   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.013181   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015065   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015751   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.017482   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:01.022099  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:01.022112  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:01.087987  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:01.088007  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.623088  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:03.637929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:03.637992  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:03.667258  896760 cri.go:89] found id: ""
	I1208 00:40:03.667272  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.667280  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:03.667286  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:03.667347  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:03.704022  896760 cri.go:89] found id: ""
	I1208 00:40:03.704035  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.704042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:03.704048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:03.704115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:03.733401  896760 cri.go:89] found id: ""
	I1208 00:40:03.733416  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.733423  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:03.733428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:03.733489  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:03.760028  896760 cri.go:89] found id: ""
	I1208 00:40:03.760042  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.760049  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:03.760054  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:03.760113  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:03.784849  896760 cri.go:89] found id: ""
	I1208 00:40:03.784864  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.784871  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:03.784877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:03.784934  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:03.809615  896760 cri.go:89] found id: ""
	I1208 00:40:03.809629  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.809636  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:03.809642  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:03.809700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:03.834857  896760 cri.go:89] found id: ""
	I1208 00:40:03.834872  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.834879  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:03.834886  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:03.834896  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:03.899301  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:03.891341   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.891827   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893391   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893830   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.895307   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:03.899312  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:03.899330  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:03.961403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:03.961422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.990248  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:03.990265  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:04.049257  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:04.049280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
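The cycle then repeats: the timestamps show a fresh pgrep probe roughly every 2-3 seconds (00:40:00, :03, :06, ...), each followed by the same empty component scan and log gathering. In shell terms the outer loop amounts to something like the sketch below (the interval is approximate and the timeout is a placeholder, neither taken from the log):

    # Hedged sketch of the retry loop implied by the timestamps above.
    deadline=$((SECONDS + 300))   # placeholder timeout, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "gave up waiting"; break; }
      sleep 3   # probes in the log land roughly 2-3 s apart
    done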
	I1208 00:40:06.564731  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:06.575277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:06.575339  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:06.603640  896760 cri.go:89] found id: ""
	I1208 00:40:06.603653  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.603662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:06.603668  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:06.603727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:06.632743  896760 cri.go:89] found id: ""
	I1208 00:40:06.632757  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.632764  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:06.632769  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:06.632830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:06.661586  896760 cri.go:89] found id: ""
	I1208 00:40:06.661600  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.661608  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:06.661613  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:06.661675  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:06.686811  896760 cri.go:89] found id: ""
	I1208 00:40:06.686833  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.686840  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:06.686845  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:06.686905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:06.712624  896760 cri.go:89] found id: ""
	I1208 00:40:06.712639  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.712646  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:06.712651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:06.712710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:06.737865  896760 cri.go:89] found id: ""
	I1208 00:40:06.737878  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.737898  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:06.737903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:06.737971  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:06.763555  896760 cri.go:89] found id: ""
	I1208 00:40:06.763569  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.763576  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:06.763583  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:06.763594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:06.820256  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:06.820275  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:06.835590  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:06.835606  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:06.900244  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:06.891950   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.892370   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.893980   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.894309   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.895881   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:06.900256  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:06.900269  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:06.964553  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:06.964573  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.497887  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:09.511442  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:09.511513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:09.537551  896760 cri.go:89] found id: ""
	I1208 00:40:09.537566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.537573  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:09.537579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:09.537639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:09.564387  896760 cri.go:89] found id: ""
	I1208 00:40:09.564400  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.564408  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:09.564412  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:09.564471  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:09.592551  896760 cri.go:89] found id: ""
	I1208 00:40:09.592566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.592573  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:09.592579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:09.592638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:09.633536  896760 cri.go:89] found id: ""
	I1208 00:40:09.633553  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.633564  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:09.633572  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:09.633644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:09.661685  896760 cri.go:89] found id: ""
	I1208 00:40:09.661700  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.661706  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:09.661711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:09.661773  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:09.689367  896760 cri.go:89] found id: ""
	I1208 00:40:09.689382  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.689390  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:09.689396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:09.689461  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:09.715001  896760 cri.go:89] found id: ""
	I1208 00:40:09.715025  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.715033  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:09.715041  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:09.715052  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.743922  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:09.743939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:09.801833  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:09.801852  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:09.817182  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:09.817199  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:09.885006  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:09.877198   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.877824   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.878892   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.879505   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.881097   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:09.885017  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:09.885028  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.453176  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:12.463998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:12.464059  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:12.489947  896760 cri.go:89] found id: ""
	I1208 00:40:12.489961  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.489968  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:12.489974  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:12.490053  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:12.517571  896760 cri.go:89] found id: ""
	I1208 00:40:12.517586  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.517594  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:12.517601  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:12.517680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:12.549649  896760 cri.go:89] found id: ""
	I1208 00:40:12.549671  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.549679  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:12.549685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:12.549764  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:12.575870  896760 cri.go:89] found id: ""
	I1208 00:40:12.575891  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.575899  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:12.575903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:12.575975  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:12.615650  896760 cri.go:89] found id: ""
	I1208 00:40:12.615664  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.615672  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:12.615677  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:12.615745  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:12.644432  896760 cri.go:89] found id: ""
	I1208 00:40:12.644446  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.644454  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:12.644460  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:12.644536  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:12.671471  896760 cri.go:89] found id: ""
	I1208 00:40:12.671485  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.671492  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:12.671499  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:12.671510  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:12.728175  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:12.728195  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:12.743959  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:12.743975  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:12.816570  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:12.816580  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:12.816591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.879403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:12.879423  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:15.414366  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:15.424841  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:15.424901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:15.450991  896760 cri.go:89] found id: ""
	I1208 00:40:15.451005  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.451012  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:15.451017  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:15.451078  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:15.477340  896760 cri.go:89] found id: ""
	I1208 00:40:15.477354  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.477361  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:15.477366  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:15.477424  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:15.503035  896760 cri.go:89] found id: ""
	I1208 00:40:15.503048  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.503055  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:15.503060  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:15.503125  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:15.527771  896760 cri.go:89] found id: ""
	I1208 00:40:15.527787  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.527794  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:15.527798  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:15.527856  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:15.556600  896760 cri.go:89] found id: ""
	I1208 00:40:15.556627  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.556634  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:15.556639  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:15.556710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:15.582706  896760 cri.go:89] found id: ""
	I1208 00:40:15.582721  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.582728  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:15.582737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:15.582821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:15.628093  896760 cri.go:89] found id: ""
	I1208 00:40:15.628114  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.628121  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:15.628129  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:15.628144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:15.691996  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:15.692026  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:15.707812  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:15.707830  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:15.773396  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:15.773407  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:15.773418  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:15.840937  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:15.840957  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:18.375079  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:18.385866  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:18.385931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:18.412581  896760 cri.go:89] found id: ""
	I1208 00:40:18.412596  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.412603  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:18.412609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:18.412672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:18.443837  896760 cri.go:89] found id: ""
	I1208 00:40:18.443863  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.443871  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:18.443876  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:18.443950  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:18.470522  896760 cri.go:89] found id: ""
	I1208 00:40:18.470549  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.470557  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:18.470565  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:18.470639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:18.500112  896760 cri.go:89] found id: ""
	I1208 00:40:18.500127  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.500136  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:18.500141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:18.500203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:18.528643  896760 cri.go:89] found id: ""
	I1208 00:40:18.528657  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.528666  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:18.528672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:18.528740  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:18.556708  896760 cri.go:89] found id: ""
	I1208 00:40:18.556722  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.556729  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:18.556735  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:18.556799  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:18.586255  896760 cri.go:89] found id: ""
	I1208 00:40:18.586270  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.586277  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:18.586285  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:18.586295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:18.651954  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:18.651974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:18.668271  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:18.668288  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:18.735458  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:18.735469  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:18.735481  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:18.797791  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:18.797811  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.328343  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:21.339006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:21.339068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:21.365940  896760 cri.go:89] found id: ""
	I1208 00:40:21.365954  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.365961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:21.365967  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:21.366028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:21.393056  896760 cri.go:89] found id: ""
	I1208 00:40:21.393071  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.393078  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:21.393083  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:21.393147  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:21.418602  896760 cri.go:89] found id: ""
	I1208 00:40:21.418616  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.418624  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:21.418630  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:21.418689  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:21.444947  896760 cri.go:89] found id: ""
	I1208 00:40:21.444963  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.444970  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:21.444976  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:21.445037  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:21.486428  896760 cri.go:89] found id: ""
	I1208 00:40:21.486461  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.486469  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:21.486476  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:21.486537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:21.516432  896760 cri.go:89] found id: ""
	I1208 00:40:21.516448  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.516455  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:21.516461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:21.516527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:21.542473  896760 cri.go:89] found id: ""
	I1208 00:40:21.542488  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.542501  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:21.542510  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:21.542521  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:21.558088  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:21.558105  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:21.646839  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:21.646850  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:21.646861  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:21.711182  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:21.711203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.739373  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:21.739391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.296477  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:24.307018  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:24.307079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:24.333480  896760 cri.go:89] found id: ""
	I1208 00:40:24.333502  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.333521  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:24.333526  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:24.333587  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:24.359023  896760 cri.go:89] found id: ""
	I1208 00:40:24.359037  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.359044  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:24.359049  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:24.359118  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:24.384337  896760 cri.go:89] found id: ""
	I1208 00:40:24.384351  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.384358  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:24.384363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:24.384425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:24.409687  896760 cri.go:89] found id: ""
	I1208 00:40:24.409702  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.409709  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:24.409714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:24.409774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:24.434605  896760 cri.go:89] found id: ""
	I1208 00:40:24.434620  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.434627  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:24.434633  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:24.434690  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:24.464542  896760 cri.go:89] found id: ""
	I1208 00:40:24.464556  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.464569  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:24.464575  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:24.464638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:24.489131  896760 cri.go:89] found id: ""
	I1208 00:40:24.489145  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.489152  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:24.489159  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:24.489170  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.544278  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:24.544298  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:24.560095  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:24.560152  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:24.637902  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:24.637914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:24.637924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:24.706243  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:24.706262  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
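The failing describe-nodes command can also be re-run verbatim to confirm the error reproduces outside the harness; a sketch using the exact command string from the warning above, with the minikube ssh wrapper as an assumption:

	# re-run the exact "describe nodes" command the log shows failing
	minikube -p functional-386544 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig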
	I1208 00:40:27.237246  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:27.247681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:27.247744  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:27.272827  896760 cri.go:89] found id: ""
	I1208 00:40:27.272841  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.272848  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:27.272854  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:27.272917  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:27.298021  896760 cri.go:89] found id: ""
	I1208 00:40:27.298035  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.298042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:27.298048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:27.298115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:27.322943  896760 cri.go:89] found id: ""
	I1208 00:40:27.322975  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.322983  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:27.322989  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:27.323049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:27.348507  896760 cri.go:89] found id: ""
	I1208 00:40:27.348522  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.348530  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:27.348535  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:27.348604  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:27.373824  896760 cri.go:89] found id: ""
	I1208 00:40:27.373838  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.373846  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:27.373851  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:27.373911  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:27.399388  896760 cri.go:89] found id: ""
	I1208 00:40:27.399402  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.399409  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:27.399415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:27.399481  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:27.427571  896760 cri.go:89] found id: ""
	I1208 00:40:27.427596  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.427604  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:27.427612  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:27.427621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:27.492713  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:27.492731  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.522269  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:27.522295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:27.582384  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:27.582402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:27.602834  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:27.602850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:27.689958  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:27.681544   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.682073   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.683995   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.684357   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.685900   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:30.190338  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:30.201839  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:30.201909  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:30.228924  896760 cri.go:89] found id: ""
	I1208 00:40:30.228939  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.228956  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:30.228963  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:30.229026  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:30.255337  896760 cri.go:89] found id: ""
	I1208 00:40:30.255351  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.255358  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:30.255363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:30.255425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:30.281566  896760 cri.go:89] found id: ""
	I1208 00:40:30.281581  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.281588  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:30.281594  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:30.281655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:30.308175  896760 cri.go:89] found id: ""
	I1208 00:40:30.308189  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.308197  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:30.308202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:30.308282  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:30.336203  896760 cri.go:89] found id: ""
	I1208 00:40:30.336218  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.336226  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:30.336241  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:30.336302  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:30.368832  896760 cri.go:89] found id: ""
	I1208 00:40:30.368847  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.368855  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:30.368860  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:30.368940  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:30.396840  896760 cri.go:89] found id: ""
	I1208 00:40:30.396855  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.396862  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:30.396870  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:30.396880  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:30.458293  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:30.458313  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:30.489792  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:30.489807  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:30.546970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:30.546989  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:30.561949  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:30.561969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:30.648665  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:30.640064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.641064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.642741   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.643112   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.644583   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
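Every cycle fails identically with dial tcp [::1]:8441: connect: connection refused, meaning nothing is accepting connections on the apiserver port inside the node. A quick probe to confirm that, assuming ss and curl are present in the node image (hypothetical commands, not from the test run):

	# is anything listening on 8441 inside the node?
	minikube -p functional-386544 ssh -- sudo ss -ltnp
	# a healthy apiserver would answer on /livez
	minikube -p functional-386544 ssh -- curl -sk https://localhost:8441/livez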
	I1208 00:40:33.148954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:33.159678  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:33.159739  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:33.188692  896760 cri.go:89] found id: ""
	I1208 00:40:33.188707  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.188725  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:33.188731  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:33.188815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:33.214527  896760 cri.go:89] found id: ""
	I1208 00:40:33.214542  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.214550  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:33.214555  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:33.214614  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:33.241307  896760 cri.go:89] found id: ""
	I1208 00:40:33.241323  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.241331  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:33.241336  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:33.241395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:33.267242  896760 cri.go:89] found id: ""
	I1208 00:40:33.267257  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.267265  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:33.267271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:33.267331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:33.293623  896760 cri.go:89] found id: ""
	I1208 00:40:33.293637  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.293645  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:33.293650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:33.293710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:33.319375  896760 cri.go:89] found id: ""
	I1208 00:40:33.319388  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.319395  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:33.319401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:33.319477  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:33.345164  896760 cri.go:89] found id: ""
	I1208 00:40:33.345178  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.345186  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:33.345193  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:33.345203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:33.402766  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:33.402783  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:33.417559  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:33.417576  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:33.484831  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:33.475790   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.476662   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.478492   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.479126   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.480879   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:33.484841  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:33.484851  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:33.553499  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:33.553527  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.087539  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:36.098484  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:36.098549  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:36.123061  896760 cri.go:89] found id: ""
	I1208 00:40:36.123075  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.123083  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:36.123089  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:36.123150  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:36.152786  896760 cri.go:89] found id: ""
	I1208 00:40:36.152800  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.152807  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:36.152813  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:36.152874  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:36.179122  896760 cri.go:89] found id: ""
	I1208 00:40:36.179137  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.179144  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:36.179150  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:36.179211  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:36.205226  896760 cri.go:89] found id: ""
	I1208 00:40:36.205239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.205247  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:36.205253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:36.205311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:36.231018  896760 cri.go:89] found id: ""
	I1208 00:40:36.231033  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.231040  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:36.231046  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:36.231104  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:36.257226  896760 cri.go:89] found id: ""
	I1208 00:40:36.257239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.257247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:36.257253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:36.257312  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:36.282378  896760 cri.go:89] found id: ""
	I1208 00:40:36.282395  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.282402  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:36.282411  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:36.282422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:36.297365  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:36.297381  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:36.361334  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:36.352968   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.353402   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355210   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355743   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.357267   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:36.361345  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:36.361356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:36.425983  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:36.426003  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.458376  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:36.458391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
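The log bundles gathered in each cycle can be pulled manually as well; the commands below are copied verbatim from the Run: lines above and wrapped in minikube ssh for the same profile:

	minikube -p functional-386544 ssh -- sudo journalctl -u containerd -n 400
	minikube -p functional-386544 ssh -- sudo journalctl -u kubelet -n 400
	minikube -p functional-386544 ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"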
	I1208 00:40:39.019300  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:39.030277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:39.030337  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:39.059011  896760 cri.go:89] found id: ""
	I1208 00:40:39.059026  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.059033  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:39.059039  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:39.059099  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:39.084787  896760 cri.go:89] found id: ""
	I1208 00:40:39.084802  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.084809  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:39.084815  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:39.084879  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:39.111166  896760 cri.go:89] found id: ""
	I1208 00:40:39.111179  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.111186  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:39.111192  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:39.111252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:39.140388  896760 cri.go:89] found id: ""
	I1208 00:40:39.140403  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.140410  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:39.140415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:39.140475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:39.165040  896760 cri.go:89] found id: ""
	I1208 00:40:39.165054  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.165062  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:39.165067  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:39.165130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:39.191099  896760 cri.go:89] found id: ""
	I1208 00:40:39.191114  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.191122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:39.191127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:39.191187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:39.215889  896760 cri.go:89] found id: ""
	I1208 00:40:39.215903  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.215910  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:39.215918  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:39.215934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:39.279738  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:39.279760  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:39.295091  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:39.295108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:39.363341  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:39.354264   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.354968   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.356687   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.357285   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.358908   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:39.363363  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:39.363373  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:39.428022  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:39.428043  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.960665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:41.971071  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:41.971142  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:41.997224  896760 cri.go:89] found id: ""
	I1208 00:40:41.997239  896760 logs.go:282] 0 containers: []
	W1208 00:40:41.997247  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:41.997253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:41.997315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:42.035665  896760 cri.go:89] found id: ""
	I1208 00:40:42.035680  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.035687  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:42.035692  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:42.035758  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:42.064088  896760 cri.go:89] found id: ""
	I1208 00:40:42.064103  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.064111  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:42.064117  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:42.064181  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:42.092740  896760 cri.go:89] found id: ""
	I1208 00:40:42.092757  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.092765  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:42.092771  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:42.092844  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:42.124291  896760 cri.go:89] found id: ""
	I1208 00:40:42.124309  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.124321  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:42.124329  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:42.124428  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:42.155416  896760 cri.go:89] found id: ""
	I1208 00:40:42.155431  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.155439  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:42.155445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:42.155515  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:42.188921  896760 cri.go:89] found id: ""
	I1208 00:40:42.188938  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.188945  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:42.188954  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:42.188965  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:42.249292  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:42.249321  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:42.266137  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:42.266155  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:42.342321  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:42.332243   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.333672   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.334366   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336262   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336858   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:42.342333  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:42.342344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:42.406583  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:42.406602  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:44.937561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:44.948618  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:44.948679  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:44.982163  896760 cri.go:89] found id: ""
	I1208 00:40:44.982177  896760 logs.go:282] 0 containers: []
	W1208 00:40:44.982195  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:44.982202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:44.982276  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:45.033982  896760 cri.go:89] found id: ""
	I1208 00:40:45.033999  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.034008  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:45.034014  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:45.034085  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:45.089336  896760 cri.go:89] found id: ""
	I1208 00:40:45.089353  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.089362  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:45.089368  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:45.089437  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:45.132530  896760 cri.go:89] found id: ""
	I1208 00:40:45.132547  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.132555  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:45.132561  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:45.132672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:45.207404  896760 cri.go:89] found id: ""
	I1208 00:40:45.207423  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.207432  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:45.207438  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:45.207516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:45.247451  896760 cri.go:89] found id: ""
	I1208 00:40:45.247477  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.247486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:45.247493  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:45.247562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:45.291347  896760 cri.go:89] found id: ""
	I1208 00:40:45.291363  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.291373  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:45.291382  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:45.291393  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:45.358718  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:45.358739  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:45.375670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:45.375694  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:45.443052  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:45.434008   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.434889   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.436585   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.437154   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.438976   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:45.443063  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:45.443075  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:45.507120  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:45.507142  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
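The roughly three-second cadence of these cycles is minikube polling for an apiserver process before giving up. The equivalent wait can be sketched as a small shell loop, reusing the pgrep pattern from the log (the minikube ssh wrapper and its exit-code propagation are assumptions):

	# block until a kube-apiserver process appears inside the node
	until minikube -p functional-386544 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 3
	done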
	I1208 00:40:48.037423  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:48.048528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:48.048599  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:48.078292  896760 cri.go:89] found id: ""
	I1208 00:40:48.078307  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.078314  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:48.078320  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:48.078380  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:48.103852  896760 cri.go:89] found id: ""
	I1208 00:40:48.103867  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.103874  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:48.103879  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:48.103938  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:48.129348  896760 cri.go:89] found id: ""
	I1208 00:40:48.129364  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.129371  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:48.129376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:48.129434  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:48.154375  896760 cri.go:89] found id: ""
	I1208 00:40:48.154390  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.154397  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:48.154402  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:48.154497  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:48.180043  896760 cri.go:89] found id: ""
	I1208 00:40:48.180058  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.180065  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:48.180070  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:48.180126  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:48.208497  896760 cri.go:89] found id: ""
	I1208 00:40:48.208511  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.208518  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:48.208524  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:48.208582  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:48.236937  896760 cri.go:89] found id: ""
	I1208 00:40:48.236960  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.236968  896760 logs.go:284] No container was found matching "kindnet"
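	Note: each polling cycle in this log repeats the same per-component query. A condensed sketch of that sweep (not minikube's actual code) in plain bash:

	    # One crictl query per control-plane component; in this run every
	    # query returns an empty ID list, hence the warnings above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<none>}"
	    done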
	I1208 00:40:48.236975  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:48.236985  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:48.252020  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:48.252037  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:48.317246  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:48.308656   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.309213   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.310815   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.311307   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.313074   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:48.317257  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:48.317267  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:48.381926  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:48.381947  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:48.410384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:48.410402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:50.965799  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:50.977456  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:50.977516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:51.008659  896760 cri.go:89] found id: ""
	I1208 00:40:51.008677  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.008685  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:51.008691  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:51.008763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:51.043130  896760 cri.go:89] found id: ""
	I1208 00:40:51.043144  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.043151  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:51.043157  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:51.043217  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:51.071991  896760 cri.go:89] found id: ""
	I1208 00:40:51.072014  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.072022  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:51.072028  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:51.072091  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:51.098639  896760 cri.go:89] found id: ""
	I1208 00:40:51.098654  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.098661  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:51.098667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:51.098727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:51.125133  896760 cri.go:89] found id: ""
	I1208 00:40:51.125147  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.125154  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:51.125159  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:51.125220  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:51.152232  896760 cri.go:89] found id: ""
	I1208 00:40:51.152247  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.152255  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:51.152271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:51.152333  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:51.181299  896760 cri.go:89] found id: ""
	I1208 00:40:51.181313  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.181321  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:51.181329  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:51.181339  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:51.243933  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:51.243955  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:51.272384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:51.272400  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:51.334024  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:51.334042  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:51.349155  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:51.349172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:51.419857  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:51.411268   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.412299   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.413252   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.414160   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.415792   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:53.920516  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:53.931349  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:53.931410  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:53.960789  896760 cri.go:89] found id: ""
	I1208 00:40:53.960805  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.960816  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:53.960821  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:53.960887  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:53.991352  896760 cri.go:89] found id: ""
	I1208 00:40:53.991368  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.991376  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:53.991382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:53.991452  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:54.024088  896760 cri.go:89] found id: ""
	I1208 00:40:54.024103  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.024117  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:54.024123  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:54.024187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:54.051247  896760 cri.go:89] found id: ""
	I1208 00:40:54.051262  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.051269  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:54.051274  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:54.051335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:54.077953  896760 cri.go:89] found id: ""
	I1208 00:40:54.077968  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.077975  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:54.077985  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:54.078051  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:54.104672  896760 cri.go:89] found id: ""
	I1208 00:40:54.104686  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.104693  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:54.104699  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:54.104757  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:54.129936  896760 cri.go:89] found id: ""
	I1208 00:40:54.129950  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.129957  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:54.129965  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:54.129976  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:54.190590  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:54.190610  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:54.206141  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:54.206158  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:54.274636  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:54.265398   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.266260   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268064   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268746   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.270305   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:54.274647  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:54.274658  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:54.343673  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:54.343693  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:56.875691  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:56.887842  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:56.887906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:56.916158  896760 cri.go:89] found id: ""
	I1208 00:40:56.916172  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.916179  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:56.916185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:56.916243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:56.940916  896760 cri.go:89] found id: ""
	I1208 00:40:56.940930  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.940937  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:56.940942  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:56.941002  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:56.964346  896760 cri.go:89] found id: ""
	I1208 00:40:56.964361  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.964368  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:56.964373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:56.964431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:56.989502  896760 cri.go:89] found id: ""
	I1208 00:40:56.989516  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.989523  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:56.989528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:56.989590  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:57.017437  896760 cri.go:89] found id: ""
	I1208 00:40:57.017452  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.017459  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:57.017465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:57.017527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:57.044860  896760 cri.go:89] found id: ""
	I1208 00:40:57.044873  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.044880  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:57.044886  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:57.044943  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:57.070028  896760 cri.go:89] found id: ""
	I1208 00:40:57.070043  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.070050  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:57.070058  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:57.070069  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:57.133938  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:57.133960  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:57.163813  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:57.163828  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:57.219970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:57.219990  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:57.234793  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:57.234810  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:57.297123  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:57.289483   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.289899   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291403   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291716   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.293180   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:59.797409  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:59.807447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:59.807521  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:59.831111  896760 cri.go:89] found id: ""
	I1208 00:40:59.831126  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.831139  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:59.831145  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:59.831204  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:59.862164  896760 cri.go:89] found id: ""
	I1208 00:40:59.862178  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.862185  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:59.862190  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:59.862245  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:59.913907  896760 cri.go:89] found id: ""
	I1208 00:40:59.913921  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.913928  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:59.913933  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:59.913990  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:59.938219  896760 cri.go:89] found id: ""
	I1208 00:40:59.938235  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.938242  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:59.938247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:59.938309  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:59.965447  896760 cri.go:89] found id: ""
	I1208 00:40:59.965460  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.965479  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:59.965485  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:59.965551  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:59.989806  896760 cri.go:89] found id: ""
	I1208 00:40:59.989820  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.989827  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:59.989833  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:59.989891  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:00.115094  896760 cri.go:89] found id: ""
	I1208 00:41:00.115110  896760 logs.go:282] 0 containers: []
	W1208 00:41:00.115118  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:00.115126  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:00.115138  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:00.211003  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:00.211027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:00.261522  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:00.261543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:00.334293  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:00.334316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:00.381440  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:00.381465  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:00.482780  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:00.472456   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.473594   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.474550   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476576   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476966   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:02.983027  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:02.993616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:02.993677  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:03.021098  896760 cri.go:89] found id: ""
	I1208 00:41:03.021114  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.021122  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:03.021128  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:03.021189  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:03.047499  896760 cri.go:89] found id: ""
	I1208 00:41:03.047521  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.047528  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:03.047534  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:03.047594  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:03.072719  896760 cri.go:89] found id: ""
	I1208 00:41:03.072749  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.072757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:03.072762  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:03.072841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:03.098912  896760 cri.go:89] found id: ""
	I1208 00:41:03.098927  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.098934  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:03.098939  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:03.099001  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:03.125225  896760 cri.go:89] found id: ""
	I1208 00:41:03.125239  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.125247  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:03.125252  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:03.125311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:03.151371  896760 cri.go:89] found id: ""
	I1208 00:41:03.151384  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.151392  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:03.151397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:03.151457  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:03.176410  896760 cri.go:89] found id: ""
	I1208 00:41:03.176424  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.176432  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:03.176439  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:03.176450  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:03.231731  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:03.231750  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:03.246857  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:03.246874  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:03.313632  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:03.304930   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.305752   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307366   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307927   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.309517   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:03.313651  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:03.313662  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:03.381170  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:03.381190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:05.911707  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:05.922187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:05.922249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:05.946678  896760 cri.go:89] found id: ""
	I1208 00:41:05.946692  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.946698  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:05.946704  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:05.946760  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:05.971332  896760 cri.go:89] found id: ""
	I1208 00:41:05.971344  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.971351  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:05.971357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:05.971418  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:05.996173  896760 cri.go:89] found id: ""
	I1208 00:41:05.996187  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.996194  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:05.996200  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:05.996257  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:06.029475  896760 cri.go:89] found id: ""
	I1208 00:41:06.029489  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.029497  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:06.029502  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:06.029578  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:06.058996  896760 cri.go:89] found id: ""
	I1208 00:41:06.059009  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.059017  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:06.059022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:06.059079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:06.083207  896760 cri.go:89] found id: ""
	I1208 00:41:06.083220  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.083227  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:06.083233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:06.083301  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:06.108817  896760 cri.go:89] found id: ""
	I1208 00:41:06.108831  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.108848  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:06.108856  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:06.108867  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:06.124009  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:06.124027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:06.189487  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:06.180763   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.181346   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183067   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183548   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.185619   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:06.189497  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:06.189509  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:06.253352  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:06.253372  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:06.285932  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:06.285948  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:08.842570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:08.854529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:08.854589  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:08.884339  896760 cri.go:89] found id: ""
	I1208 00:41:08.884355  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.884362  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:08.884367  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:08.884427  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:08.914891  896760 cri.go:89] found id: ""
	I1208 00:41:08.914905  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.914924  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:08.914929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:08.914998  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:08.941436  896760 cri.go:89] found id: ""
	I1208 00:41:08.941452  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.941459  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:08.941465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:08.941535  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:08.966802  896760 cri.go:89] found id: ""
	I1208 00:41:08.966816  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.966823  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:08.966829  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:08.966890  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:09.002946  896760 cri.go:89] found id: ""
	I1208 00:41:09.002962  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.002971  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:09.002977  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:09.003049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:09.031184  896760 cri.go:89] found id: ""
	I1208 00:41:09.031199  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.031207  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:09.031213  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:09.031288  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:09.055946  896760 cri.go:89] found id: ""
	I1208 00:41:09.055971  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.055979  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:09.055987  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:09.055997  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:09.121830  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:09.121850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:09.150682  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:09.150700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:09.214609  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:09.214636  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:09.230018  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:09.230035  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:09.298095  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:09.289090   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.289949   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.291555   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.292097   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.293719   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:11.798922  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:11.809212  896760 kubeadm.go:602] duration metric: took 4m1.466236852s to restartPrimaryControlPlane
	W1208 00:41:11.809278  896760 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:41:11.810440  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
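	Note: at this point the restart path has been abandoned: after roughly four minutes of polling (4m1.46s per the duration metric above), minikube gives up waiting for an apiserver and resets the cluster with kubeadm instead. The reset removes the kubeadm-managed files under /etc/kubernetes, which is why every config-file check below reports "No such file or directory". An illustrative way to see what remains afterwards:

	    # After the reset the kubeconfigs kubeadm manages (admin.conf, kubelet.conf,
	    # controller-manager.conf, scheduler.conf) are gone, matching the checks below.
	    sudo ls -la /etc/kubernetes/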
	I1208 00:41:12.224260  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:41:12.238539  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:41:12.247058  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:41:12.247114  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:41:12.255525  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:41:12.255534  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:41:12.255586  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:41:12.263892  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:41:12.263953  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:41:12.271955  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:41:12.280091  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:41:12.280149  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:41:12.288143  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.296120  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:41:12.296196  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.303946  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:41:12.312368  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:41:12.312423  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
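
The stale-config cleanup above follows one pattern per kubeconfig: keep the file only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm can regenerate it. A minimal bash sketch of that logic; the file names and endpoint are taken from the log, the loop itself is illustrative:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the kubeconfig only if it points at the expected endpoint
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done

In this run every grep exits with status 2 because the files do not exist after the reset, so the rm calls are no-ops.
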
	I1208 00:41:12.320463  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:41:12.364373  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:41:12.364695  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:41:12.438406  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:41:12.438492  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:41:12.438531  896760 kubeadm.go:319] OS: Linux
	I1208 00:41:12.438577  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:41:12.438625  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:41:12.438672  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:41:12.438719  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:41:12.438766  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:41:12.438813  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:41:12.438857  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:41:12.438904  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:41:12.438949  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:41:12.514836  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:41:12.514942  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:41:12.515034  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:41:12.521560  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:41:12.527008  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:41:12.527099  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:41:12.527164  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:41:12.527241  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:41:12.527300  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:41:12.527369  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:41:12.527423  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:41:12.527485  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:41:12.527544  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:41:12.527617  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:41:12.527688  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:41:12.527724  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:41:12.527778  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:41:13.245010  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:41:13.299392  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:41:13.614595  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:41:13.963710  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:41:14.175279  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:41:14.176043  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:41:14.180186  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:41:14.183629  896760 out.go:252]   - Booting up control plane ...
	I1208 00:41:14.183729  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:41:14.183806  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:41:14.184436  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:41:14.204887  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:41:14.204990  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:41:14.213421  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:41:14.213704  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:41:14.213908  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:41:14.352082  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:41:14.352289  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:45:14.352397  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00008019s
	I1208 00:45:14.352432  896760 kubeadm.go:319] 
	I1208 00:45:14.352488  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:45:14.352520  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:45:14.352633  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:45:14.352639  896760 kubeadm.go:319] 
	I1208 00:45:14.352742  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:45:14.352774  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:45:14.352803  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:45:14.352807  896760 kubeadm.go:319] 
	I1208 00:45:14.356965  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:45:14.357429  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:45:14.357540  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:45:14.357802  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 00:45:14.357807  896760 kubeadm.go:319] 
	I1208 00:45:14.357875  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:45:14.357995  896760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00008019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
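
The first init attempt has now failed and minikube resets and retries below. The failure mode is already fully visible in the check itself: kubeadm polled the kubelet's local health endpoint for four minutes without ever getting an answer. Reproducing that probe by hand on the node, as a sketch (the port and path come from the log; the follow-up commands are the ones kubeadm itself suggests):

    # the same probe kubeadm's wait-control-plane phase performs
    curl -sSL http://127.0.0.1:10248/healthz || echo "kubelet not serving /healthz"
    # connection refused means the kubelet process is not staying up; check why
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet -n 50

The kubelet journal excerpt at the end of this report shows what such a check would have returned.
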
	
	I1208 00:45:14.358087  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:45:14.770086  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:45:14.783732  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:45:14.783788  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:45:14.791646  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:45:14.791657  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:45:14.791710  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:45:14.799512  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:45:14.799569  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:45:14.807303  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:45:14.815223  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:45:14.815280  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:45:14.822916  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.831219  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:45:14.831274  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.838751  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:45:14.846479  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:45:14.846535  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:45:14.855105  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:45:14.892727  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:45:14.893019  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:45:14.958827  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:45:14.958888  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:45:14.958921  896760 kubeadm.go:319] OS: Linux
	I1208 00:45:14.958963  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:45:14.959008  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:45:14.959052  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:45:14.959097  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:45:14.959143  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:45:14.959192  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:45:14.959234  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:45:14.959279  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:45:14.959321  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:45:15.063986  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:45:15.064091  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:45:15.064182  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:45:15.072119  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:45:15.073836  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:45:15.073929  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:45:15.073997  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:45:15.074078  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:45:15.074847  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:45:15.074919  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:45:15.074970  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:45:15.075029  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:45:15.075086  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:45:15.075260  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:45:15.075466  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:45:15.075788  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:45:15.075847  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:45:15.207541  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:45:15.419182  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:45:15.708081  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:45:15.925468  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:45:16.152957  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:45:16.153669  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:45:16.156472  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:45:16.157817  896760 out.go:252]   - Booting up control plane ...
	I1208 00:45:16.157909  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:45:16.157987  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:45:16.159025  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:45:16.179954  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:45:16.180052  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:45:16.189229  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:45:16.190665  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:45:16.190709  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:45:16.336970  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:45:16.337083  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:49:16.337272  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305556s
	I1208 00:49:16.337296  896760 kubeadm.go:319] 
	I1208 00:49:16.337409  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:49:16.337518  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:49:16.337839  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:49:16.337849  896760 kubeadm.go:319] 
	I1208 00:49:16.338164  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:49:16.338221  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:49:16.338281  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:49:16.338285  896760 kubeadm.go:319] 
	I1208 00:49:16.344611  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:49:16.345152  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:49:16.345268  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:49:16.345632  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:49:16.345641  896760 kubeadm.go:319] 
	I1208 00:49:16.345722  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:49:16.345780  896760 kubeadm.go:403] duration metric: took 12m6.045651138s to StartCluster
	I1208 00:49:16.345820  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:49:16.345897  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:49:16.378062  896760 cri.go:89] found id: ""
	I1208 00:49:16.378080  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.378088  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:49:16.378094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:49:16.378167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:49:16.414010  896760 cri.go:89] found id: ""
	I1208 00:49:16.414024  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.414043  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:49:16.414048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:49:16.414116  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:49:16.441708  896760 cri.go:89] found id: ""
	I1208 00:49:16.441732  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.441739  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:49:16.441745  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:49:16.441816  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:49:16.469812  896760 cri.go:89] found id: ""
	I1208 00:49:16.469826  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.469833  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:49:16.469848  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:49:16.469906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:49:16.495155  896760 cri.go:89] found id: ""
	I1208 00:49:16.495170  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.495177  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:49:16.495183  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:49:16.495242  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:49:16.522141  896760 cri.go:89] found id: ""
	I1208 00:49:16.522155  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.522163  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:49:16.522168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:49:16.522227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:49:16.551643  896760 cri.go:89] found id: ""
	I1208 00:49:16.551656  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.551663  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:49:16.551671  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:49:16.551681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:49:16.614342  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:49:16.614362  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:49:16.644124  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:49:16.644140  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:49:16.703646  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:49:16.703665  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:49:16.718513  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:49:16.718530  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:49:16.782371  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 00:49:16.782383  896760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:49:16.782409  896760 out.go:285] * 
	W1208 00:49:16.782515  896760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.782535  896760 out.go:285] * 
	W1208 00:49:16.784660  896760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:49:16.789524  896760 out.go:203] 
	W1208 00:49:16.792367  896760 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.792413  896760 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:49:16.792436  896760 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:49:16.795713  896760 out.go:203] 
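
The printed suggestion targets the cgroup driver, but the kubeadm warning repeated throughout this run names a more specific knob: on a cgroup v1 host, kubelet v1.35 and newer refuses to start unless the FailCgroupV1 kubelet configuration option is set to false. Assuming that option corresponds to a failCgroupV1 field in the kubelet's config file (a hedged sketch, not a verified fix for this job; note that kubeadm rewrites /var/lib/kubelet/config.yaml on every init, so this would only hold between init attempts):

    # opt the kubelet back in to cgroup v1, per the [WARNING SystemVerification] text
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet

A more durable route would be to carry the setting through the kubeletconfiguration patch target that the [patches] step above already applies, but that is beyond what this log shows.
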
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919395045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919406500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919452219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919473840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919487707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919499637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919508720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919528249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919545578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919576815Z" level=info msg="Connect containerd service"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919974424Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.920657812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935258404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935352461Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935383608Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935425134Z" level=info msg="Start recovering state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981163284Z" level=info msg="Start event monitor"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981372805Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981441769Z" level=info msg="Start streaming server"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981512023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981572914Z" level=info msg="runtime interface starting up..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981643085Z" level=info msg="starting plugins..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981710277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981908794Z" level=info msg="containerd successfully booted in 0.086733s"
	Dec 08 00:37:08 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
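
One note on the containerd excerpt: the "failed to load cni during init" error only reports that /etc/cni/net.d is empty, which is expected this early in bring-up, before any CNI config has been installed, and is not the failure in this run. Checking the directory directly, as a sketch:

    sudo ls -la /etc/cni/net.d    # empty until a CNI (kindnet on this driver) writes its config
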
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:18.027721   21059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:18.028201   21059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:18.029938   21059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:18.030362   21059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:18.031981   21059 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:49:18 up  5:31,  0 user,  load average: 0.01, 0.15, 0.57
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:49:14 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:15 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 08 00:49:15 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:15 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:15 functional-386544 kubelet[20863]: E1208 00:49:15.635852   20863 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:15 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:15 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:16 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 00:49:16 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:16 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:16 functional-386544 kubelet[20869]: E1208 00:49:16.406689   20869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:16 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:16 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 00:49:17 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 kubelet[20970]: E1208 00:49:17.142689   20970 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 00:49:17 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 kubelet[21029]: E1208 00:49:17.910015   21029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
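Note: the kubelet journal above contains the actual root cause. The host kernel (5.15.0-1084-aws on Ubuntu 20.04) still mounts cgroup v1, and kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, so systemd cycles it through restart counters 319-322 and the apiserver on port 8441 never comes up; that is also why "describe nodes" above fails with connection refused. A minimal Go sketch, not part of the test harness, for checking which cgroup hierarchy a host mounts (it relies on CGROUP2_SUPER_MAGIC from linux/magic.h):

// cgroupcheck is an illustrative helper: it statfs's /sys/fs/cgroup and
// reports whether the unified (v2) hierarchy is mounted there.
package main

import (
	"fmt"
	"syscall"
)

const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC, linux/magic.h

func main() {
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &fs); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if fs.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1: kubelet v1.35.0-beta.0 will refuse to start")
	}
}

On this Ubuntu 20.04 runner it would report cgroup v1, matching the validation error in the journal.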
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (360.845536ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (733.01s)
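Note: most of the 733s is the harness waiting out the crash loop. The journal shows restart counter 319 at 00:49:15 and 322 at 00:49:17, i.e. three kubelet restarts in roughly two seconds. A back-of-envelope sketch of that cadence, using timestamps and counters copied from the journal above:

// restartrate estimates how often systemd is relaunching kubelet,
// from two journal samples taken in this run.
package main

import (
	"fmt"
	"time"
)

func main() {
	first, _ := time.Parse("15:04:05", "00:49:15") // restart counter 319
	last, _ := time.Parse("15:04:05", "00:49:17")  // restart counter 322
	restarts := 322 - 319
	fmt.Printf("roughly one restart every %v\n", last.Sub(first)/time.Duration(restarts))
}

This prints roughly one restart every 666ms: the validation failure is immediate, so the unit never stays up long enough for the apiserver to start.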

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-386544 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-386544 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (60.733263ms)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-386544 get po -l tier=control-plane -n kube-system -o=json": exit status 1
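Note: the empty List on stdout plus the stderr above means kubectl never reached the apiserver at 192.168.49.2:8441, the same endpoint the kubelet crash loop left dead, so ComponentHealth fails as a downstream symptom rather than a separate bug. An illustrative probe, not part of the harness, that distinguishes "connection refused" (host up, nothing listening) from a timeout (host unreachable):

// apiprobe dials the apiserver endpoint reported in this run and
// classifies the failure mode.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // here: connect: connection refused
		return
	}
	defer conn.Close()
	fmt.Println("TCP connect OK, something is listening on 8441")
}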
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
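Note: the inspect output confirms the failure is inside the guest, not at the Docker layer: the container's State.Status is "running" with RestartCount 0, and 8441/tcp is published on 127.0.0.1:33561. A small sketch, mirroring the inspect template the harness itself uses for "22/tcp" in the Last Start log below, to read that port mapping:

// portmap shells out to docker inspect with a Go template to find the
// host port bound to the apiserver's 8441/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-386544").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out))) // 33561 in this run
}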
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (336.044388ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
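Note: Host and APIServer are independent fields in minikube's status, which is why the two probes disagree: the docker container (Host) is Running while the apiserver inside it is Stopped. The --format flag is an ordinary Go text/template evaluated over that status struct; an illustrative rendering, using a hypothetical stand-in for the two fields queried here:

// statusfmt shows how a --format template like {{.Host}} or {{.APIServer}}
// is evaluated; the Status struct below is a stand-in, not minikube's own type.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"} // values observed in this run
	tmpl := template.Must(template.New("status").Parse("Host={{.Host}} APIServer={{.APIServer}}\n"))
	_ = tmpl.Execute(os.Stdout, st)
}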
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-932121 image ls --format yaml --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format json --alsologtostderr                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls --format table --alsologtostderr                                                                                             │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ ssh     │ functional-932121 ssh pgrep buildkitd                                                                                                                   │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ image   │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr                                                  │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ image   │ functional-932121 image ls                                                                                                                              │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ delete  │ -p functional-932121                                                                                                                                    │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
	│ start   │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │                     │
	│ start   │ -p functional-386544 --alsologtostderr -v=8                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:30 UTC │                     │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add registry.k8s.io/pause:latest                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache add minikube-local-cache-test:functional-386544                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ functional-386544 cache delete minikube-local-cache-test:functional-386544                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl images                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:36 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │                     │
	│ cache   │ functional-386544 cache reload                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:37 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ kubectl │ functional-386544 kubectl -- --context functional-386544 get pods                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	│ start   │ -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:37:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:37:06.019721  896760 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:37:06.019851  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.019855  896760 out.go:374] Setting ErrFile to fd 2...
	I1208 00:37:06.019858  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.020163  896760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:37:06.020664  896760 out.go:368] Setting JSON to false
	I1208 00:37:06.021613  896760 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":19179,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:37:06.021695  896760 start.go:143] virtualization:  
	I1208 00:37:06.025173  896760 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:37:06.029087  896760 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:37:06.029181  896760 notify.go:221] Checking for updates...
	I1208 00:37:06.035043  896760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:37:06.037984  896760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:37:06.041170  896760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:37:06.044080  896760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:37:06.047053  896760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:37:06.050554  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:06.050663  896760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:37:06.082313  896760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:37:06.082426  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.147928  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.138471154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.148036  896760 docker.go:319] overlay module found
	I1208 00:37:06.151011  896760 out.go:179] * Using the docker driver based on existing profile
	I1208 00:37:06.153817  896760 start.go:309] selected driver: docker
	I1208 00:37:06.153826  896760 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.153925  896760 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:37:06.154035  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.211588  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.202265066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.212013  896760 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:37:06.212038  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:06.212099  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:06.212152  896760 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.217244  896760 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:37:06.220210  896760 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:37:06.223461  896760 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:37:06.226522  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:06.226581  896760 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:37:06.226589  896760 cache.go:65] Caching tarball of preloaded images
	I1208 00:37:06.226692  896760 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:37:06.226679  896760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:37:06.226706  896760 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:37:06.226817  896760 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:37:06.250884  896760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:37:06.250894  896760 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:37:06.250908  896760 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:37:06.250945  896760 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:37:06.250999  896760 start.go:364] duration metric: took 38.401µs to acquireMachinesLock for "functional-386544"
	I1208 00:37:06.251017  896760 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:37:06.251022  896760 fix.go:54] fixHost starting: 
	I1208 00:37:06.251283  896760 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:37:06.268900  896760 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:37:06.268920  896760 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:37:06.272102  896760 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:37:06.272127  896760 machine.go:94] provisionDockerMachine start ...
	I1208 00:37:06.272215  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.289500  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.289831  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.289837  896760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:37:06.446749  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.446764  896760 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:37:06.446826  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.466658  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.466960  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.466968  896760 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:37:06.637199  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.637280  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.656923  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.657245  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.657259  896760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:37:06.810893  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:37:06.810908  896760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:37:06.810925  896760 ubuntu.go:190] setting up certificates
	I1208 00:37:06.810935  896760 provision.go:84] configureAuth start
	I1208 00:37:06.811016  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:06.829686  896760 provision.go:143] copyHostCerts
	I1208 00:37:06.829765  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:37:06.829784  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:37:06.829861  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:37:06.829960  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:37:06.829964  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:37:06.829992  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:37:06.830039  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:37:06.830042  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:37:06.830063  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:37:06.830106  896760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:37:07.178648  896760 provision.go:177] copyRemoteCerts
	I1208 00:37:07.178704  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:37:07.178748  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.196483  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.308383  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:37:07.329033  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:37:07.348621  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:37:07.367658  896760 provision.go:87] duration metric: took 556.701814ms to configureAuth
	I1208 00:37:07.367675  896760 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:37:07.367867  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:07.367872  896760 machine.go:97] duration metric: took 1.095740792s to provisionDockerMachine
	I1208 00:37:07.367878  896760 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:37:07.367889  896760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:37:07.367938  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:37:07.367977  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.392993  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.498867  896760 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:37:07.502617  896760 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:37:07.502635  896760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:37:07.502647  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:37:07.502710  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:37:07.502786  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:37:07.502867  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:37:07.502912  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:37:07.511139  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:07.530267  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:37:07.549480  896760 start.go:296] duration metric: took 181.586948ms for postStartSetup
	I1208 00:37:07.549558  896760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:37:07.549616  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.567759  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.671740  896760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:37:07.676721  896760 fix.go:56] duration metric: took 1.425689657s for fixHost
	I1208 00:37:07.676741  896760 start.go:83] releasing machines lock for "functional-386544", held for 1.425734498s
	I1208 00:37:07.676811  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:07.694624  896760 ssh_runner.go:195] Run: cat /version.json
	I1208 00:37:07.694669  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.694717  896760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:37:07.694775  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.720790  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.720932  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.911560  896760 ssh_runner.go:195] Run: systemctl --version
	I1208 00:37:07.918241  896760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:37:07.922676  896760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:37:07.922750  896760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:37:07.930831  896760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:37:07.930844  896760 start.go:496] detecting cgroup driver to use...
	I1208 00:37:07.930875  896760 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:37:07.930921  896760 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:37:07.947115  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:37:07.961050  896760 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:37:07.961113  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:37:07.977365  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:37:07.991192  896760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:37:08.126175  896760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:37:08.269608  896760 docker.go:234] disabling docker service ...
	I1208 00:37:08.269664  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:37:08.284945  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:37:08.299108  896760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:37:08.432565  896760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:37:08.555248  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:37:08.569474  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:37:08.585412  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:37:08.595004  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:37:08.604840  896760 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:37:08.604902  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:37:08.613812  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.623203  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:37:08.633142  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.643038  896760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:37:08.652239  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:37:08.661623  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:37:08.671250  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 00:37:08.680657  896760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:37:08.688616  896760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:37:08.696764  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:08.823042  896760 ssh_runner.go:195] Run: sudo systemctl restart containerd
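
All of the config.toml changes above are in-place sed substitutions followed by a daemon-reload and a containerd restart. As a minimal sketch of the same edit done natively, assuming a readable /etc/containerd/config.toml and sufficient privileges (the regex mirrors the SystemdCgroup sed line above; the helper is illustrative, not minikube's actual code):

package main

import (
	"os"
	"regexp"
)

// rewriteConfig mirrors the sed call that forces SystemdCgroup = false,
// matching the "cgroupfs" driver detected on the host earlier in the log.
func rewriteConfig(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteConfig("/etc/containerd/config.toml"); err != nil {
		panic(err)
	}
}

As in the log, the rewrite only takes effect after systemctl daemon-reload and a containerd restart.
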
	I1208 00:37:08.984184  896760 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:37:08.984277  896760 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:37:08.989088  896760 start.go:564] Will wait 60s for crictl version
	I1208 00:37:08.989158  896760 ssh_runner.go:195] Run: which crictl
	I1208 00:37:08.993493  896760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:37:09.024246  896760 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:37:09.024323  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.048155  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.074342  896760 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:37:09.077377  896760 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:37:09.094080  896760 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:37:09.101988  896760 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:37:09.104771  896760 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:37:09.104921  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:09.104997  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.131121  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.131133  896760 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:37:09.131193  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.156235  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.156250  896760 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:37:09.156277  896760 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:37:09.156381  896760 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:37:09.156452  896760 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:37:09.182781  896760 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:37:09.182799  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:09.182812  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:09.182826  896760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:37:09.182847  896760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:37:09.182951  896760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
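
The kubeadm config dumped above is rendered from the kubeadm.go:190 options before being copied to /var/tmp/minikube/kubeadm.yaml.new. A rough sketch of how such a rendering step can look with Go's text/template, assuming a heavily trimmed template and field set (minikube's real template and option struct are much larger):

package main

import (
	"os"
	"text/template"
)

// Opts carries just the fields this trimmed-down template needs.
type Opts struct {
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the cluster config logged above.
	t.Execute(os.Stdout, Opts{
		APIServerPort:     8441,
		KubernetesVersion: "v1.35.0-beta.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
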
	I1208 00:37:09.183025  896760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:37:09.190958  896760 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:37:09.191018  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:37:09.198701  896760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:37:09.211735  896760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:37:09.225024  896760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1208 00:37:09.237969  896760 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:37:09.241818  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:09.362221  896760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:37:09.592794  896760 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:37:09.592805  896760 certs.go:195] generating shared ca certs ...
	I1208 00:37:09.592820  896760 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:37:09.592963  896760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:37:09.593013  896760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:37:09.593019  896760 certs.go:257] generating profile certs ...
	I1208 00:37:09.593102  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:37:09.593154  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:37:09.593193  896760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:37:09.593299  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:37:09.593334  896760 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:37:09.593340  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:37:09.593370  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:37:09.593392  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:37:09.593414  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:37:09.593455  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:09.594053  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:37:09.614864  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:37:09.633613  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:37:09.652858  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:37:09.672208  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:37:09.691703  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:37:09.711394  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:37:09.730947  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:37:09.750211  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:37:09.769149  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:37:09.787710  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:37:09.806312  896760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:37:09.821128  896760 ssh_runner.go:195] Run: openssl version
	I1208 00:37:09.827672  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.835407  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:37:09.843631  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847882  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847954  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.890017  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:37:09.897920  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.905917  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:37:09.913958  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918017  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918088  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.960169  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:37:09.968154  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.975996  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:37:09.984080  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988210  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988283  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:37:10.030981  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
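
Each CA above goes through the same cycle: copy into /usr/share/ca-certificates, symlink under /etc/ssl/certs, hash with openssl x509 -hash, then confirm the hash-named .0 symlink that OpenSSL uses for subject-hash lookups (b5213941.0 corresponds to minikubeCA.pem here, 3ec20f2e.0 and 51391683.0 to the test certs). A sketch of creating such a hash link, shelling out to openssl since the subject hash is OpenSSL-specific; the helper name, and the assumption that the link is created this way rather than by update-ca-certificates, are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/hash.0 symlink that
// OpenSSL uses to look up a CA by subject, as the log verifies above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
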
	I1208 00:37:10.040434  896760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:37:10.045482  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:37:10.089037  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:37:10.131753  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:37:10.174120  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:37:10.216988  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:37:10.258490  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
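
The six -checkend 86400 runs above ask openssl whether each certificate expires within 24 hours (86400 seconds); a zero exit means the cert is still good for that window. The same check done with Go's standard library, as a sketch against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires inside the given window, like `openssl x509 -checkend 86400`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour))
}
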
	I1208 00:37:10.300139  896760 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:10.300218  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:37:10.300290  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.333072  896760 cri.go:89] found id: ""
	I1208 00:37:10.333133  896760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:37:10.342949  896760 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:37:10.342966  896760 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:37:10.343020  896760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:37:10.351917  896760 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.352512  896760 kubeconfig.go:125] found "functional-386544" server: "https://192.168.49.2:8441"
	I1208 00:37:10.356488  896760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:37:10.371422  896760 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:22:35.509962182 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:37:09.232874988 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
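
Drift detection here is a plain diff -u of the old kubeadm.yaml against the freshly rendered .new file: exit status 0 means the files are identical (fast path), 1 means they differ and the cluster must be reconfigured, and anything else is a genuine failure. A minimal Go sketch of that decision (the function name is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` the way the log above does and
// maps diff's exit status onto a drift decision.
func configDrifted(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: no reconfiguration needed
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ: reconfigure the cluster
	}
	return false, err // diff itself failed
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}
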
	I1208 00:37:10.371434  896760 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:37:10.371448  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 00:37:10.371510  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.399030  896760 cri.go:89] found id: ""
	I1208 00:37:10.399096  896760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:37:10.416716  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:37:10.425417  896760 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  8 00:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  8 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  8 00:26 /etc/kubernetes/scheduler.conf
	
	I1208 00:37:10.425491  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:37:10.433870  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:37:10.441918  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.441981  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:37:10.450104  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.458339  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.458406  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.466222  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:37:10.474083  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.474143  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:37:10.482138  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:37:10.490230  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:10.544026  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.386589  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.605461  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.662330  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
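
Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) so valid existing state is reused. A condensed sketch of that sequence, assuming the versioned binary path shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the restartPrimaryControlPlane run above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
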
	I1208 00:37:11.710396  896760 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:37:11.710500  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.210751  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.710625  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.211368  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.710629  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.210663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.710895  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.211137  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.711373  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.211351  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.710569  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.210608  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.710907  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.211191  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.710689  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.210845  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.710623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.211163  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.711542  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.210600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.710988  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.210661  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.710658  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.210891  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.711295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.210648  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.710685  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.211112  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.711299  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.210714  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.710657  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.710683  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.210651  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.711193  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.210592  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.710674  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.211143  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.711278  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.211249  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.711431  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.211577  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.711520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.711607  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.210653  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.711085  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.211213  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.210570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.710652  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.210844  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.710667  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.210595  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.710997  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.210972  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.710639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.211558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.711501  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.211600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.711606  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.211418  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.711303  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.210746  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.710559  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.210639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.710659  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.211497  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.711558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.211385  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.710636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.210636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.710883  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.211287  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.210917  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.710809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.210623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.710673  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.210672  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.210578  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.710671  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.210617  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.711226  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.211295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.711314  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.211406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.711446  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.211464  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.710703  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.211414  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.711319  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.210772  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.710561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.211386  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.710908  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.211262  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.710640  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.211590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.710555  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.211517  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.711490  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.211619  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.710621  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.710668  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.211521  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.711350  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.211355  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.711224  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.211378  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.710638  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:11.210954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
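
The wait that starts at api_server.go:52 polls pgrep on a roughly 500ms cadence; here it exhausts a full minute (00:37:11 to 00:38:11) without ever finding a kube-apiserver process, so control falls through to the log gathering below. A bare-bones sketch of such a wait loop, with the pattern and cadence taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process the way the
// wait above does, returning an error on timeout.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(time.Minute))
}
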
	I1208 00:38:11.710606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:11.710708  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:11.736420  896760 cri.go:89] found id: ""
	I1208 00:38:11.736434  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.736442  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:11.736447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:11.736514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:11.760217  896760 cri.go:89] found id: ""
	I1208 00:38:11.760231  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.760238  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:11.760243  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:11.760300  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:11.784869  896760 cri.go:89] found id: ""
	I1208 00:38:11.784882  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.784895  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:11.784900  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:11.784963  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:11.809329  896760 cri.go:89] found id: ""
	I1208 00:38:11.809345  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.809352  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:11.809357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:11.809412  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:11.836937  896760 cri.go:89] found id: ""
	I1208 00:38:11.836951  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.836958  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:11.836964  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:11.837022  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:11.861979  896760 cri.go:89] found id: ""
	I1208 00:38:11.861993  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.862000  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:11.862006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:11.862067  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:11.891173  896760 cri.go:89] found id: ""
	I1208 00:38:11.891187  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.891194  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:11.891202  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:11.891213  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:11.958401  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:11.958411  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:11.958422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:12.022654  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:12.022674  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:12.054077  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:12.054093  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:12.115415  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:12.115439  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
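
The container-status collector above tries crictl first and falls back to docker, mirroring the shell || chain in the command (the which-crictl indirection is elided here). A simplified Go equivalent of that fallback, assuming passwordless sudo as in the test environment:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log:
// try `sudo crictl ps -a`, and only if that fails, `sudo docker ps -a`.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	fmt.Println(string(out), err)
}
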
	I1208 00:38:14.631602  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:14.646925  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:14.646987  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:14.674503  896760 cri.go:89] found id: ""
	I1208 00:38:14.674517  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.674524  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:14.674529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:14.674593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:14.699396  896760 cri.go:89] found id: ""
	I1208 00:38:14.699419  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.699426  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:14.699432  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:14.699503  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:14.724021  896760 cri.go:89] found id: ""
	I1208 00:38:14.724034  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.724042  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:14.724047  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:14.724106  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:14.753658  896760 cri.go:89] found id: ""
	I1208 00:38:14.753672  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.753679  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:14.753684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:14.753749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:14.781621  896760 cri.go:89] found id: ""
	I1208 00:38:14.781635  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.781643  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:14.781649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:14.781707  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:14.807494  896760 cri.go:89] found id: ""
	I1208 00:38:14.807509  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.807516  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:14.807521  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:14.807593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:14.833097  896760 cri.go:89] found id: ""
	I1208 00:38:14.833112  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.833119  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:14.833126  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:14.833136  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:14.889095  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:14.889114  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:14.903785  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:14.903800  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:14.971093  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:14.963141   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.963548   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965048   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965374   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.966849   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:14.963141   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.963548   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965048   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965374   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.966849   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:14.971115  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:14.971126  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:15.034725  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:15.034748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:17.575615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:17.586181  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:17.586244  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:17.614489  896760 cri.go:89] found id: ""
	I1208 00:38:17.614503  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.614510  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:17.614516  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:17.614591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:17.646217  896760 cri.go:89] found id: ""
	I1208 00:38:17.646238  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.646245  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:17.646250  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:17.646320  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:17.672670  896760 cri.go:89] found id: ""
	I1208 00:38:17.672684  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.672699  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:17.672705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:17.672771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:17.697872  896760 cri.go:89] found id: ""
	I1208 00:38:17.697886  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.697894  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:17.697899  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:17.697960  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:17.723061  896760 cri.go:89] found id: ""
	I1208 00:38:17.723075  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.723083  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:17.723088  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:17.723148  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:17.751201  896760 cri.go:89] found id: ""
	I1208 00:38:17.751215  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.751257  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:17.751263  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:17.751327  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:17.776877  896760 cri.go:89] found id: ""
	I1208 00:38:17.776898  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.776906  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:17.776914  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:17.776924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:17.833629  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:17.833648  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:17.848545  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:17.848562  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:17.916466  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:17.907244   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.908811   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.909382   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.910922   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.911252   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
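	
	Every kubectl attempt here dies with "connection refused" on [::1]:8441, i.e. nothing is accepting connections on the apiserver port yet. A quick hedged check from inside the node, assuming `ss` and `curl` are available and that the apiserver serves the standard `/livez` endpoint:
	
	    # Is anything listening on the apiserver port used by this profile?
	    ss -ltn | grep -w 8441 || echo "nothing listening on 8441"
	    # If there is a listener, probe its health endpoint; -k skips TLS
	    # verification since no client certificate is presented here.
	    curl -ks https://localhost:8441/livez; echo
	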
	I1208 00:38:17.916477  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:17.916488  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:17.977728  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:17.977748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
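	
	Each round gathers the same five pieces of evidence. For reference, the commands as they could be run manually on the node (`--no-pager` is an addition for interactive use; minikube itself captures the output over SSH instead):
	
	    sudo journalctl -u kubelet -n 400 --no-pager
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u containerd -n 400 --no-pager
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	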
	I1208 00:38:20.518003  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:20.528606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:20.528668  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:20.553281  896760 cri.go:89] found id: ""
	I1208 00:38:20.553294  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.553301  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:20.553307  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:20.553362  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:20.578221  896760 cri.go:89] found id: ""
	I1208 00:38:20.578241  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.578249  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:20.578254  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:20.578315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:20.615636  896760 cri.go:89] found id: ""
	I1208 00:38:20.615650  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.615657  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:20.615662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:20.615717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:20.658083  896760 cri.go:89] found id: ""
	I1208 00:38:20.658097  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.658104  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:20.658109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:20.658167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:20.683361  896760 cri.go:89] found id: ""
	I1208 00:38:20.683375  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.683382  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:20.683387  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:20.683445  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:20.708740  896760 cri.go:89] found id: ""
	I1208 00:38:20.708754  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.708761  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:20.708767  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:20.708830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:20.733148  896760 cri.go:89] found id: ""
	I1208 00:38:20.733162  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.733169  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:20.733177  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:20.733187  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:20.789345  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:20.789364  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:20.804329  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:20.804344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:20.869258  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:20.860745   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.861580   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863087   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863478   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.865172   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:20.869270  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:20.869280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:20.935198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:20.935220  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:23.463419  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:23.473440  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:23.473514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:23.498380  896760 cri.go:89] found id: ""
	I1208 00:38:23.498395  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.498402  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:23.498407  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:23.498504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:23.524663  896760 cri.go:89] found id: ""
	I1208 00:38:23.524677  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.524683  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:23.524689  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:23.524749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:23.554276  896760 cri.go:89] found id: ""
	I1208 00:38:23.554300  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.554308  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:23.554314  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:23.554373  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:23.581295  896760 cri.go:89] found id: ""
	I1208 00:38:23.581310  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.581317  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:23.581322  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:23.581394  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:23.609485  896760 cri.go:89] found id: ""
	I1208 00:38:23.609499  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.609506  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:23.609512  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:23.609568  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:23.639329  896760 cri.go:89] found id: ""
	I1208 00:38:23.639343  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.639350  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:23.639356  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:23.639415  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:23.666775  896760 cri.go:89] found id: ""
	I1208 00:38:23.666789  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.666796  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:23.666804  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:23.666816  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:23.726052  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:23.726071  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:23.741283  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:23.741300  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:23.814882  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:23.806382   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.807106   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.808836   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.809397   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.811003   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:23.814894  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:23.814918  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:23.882172  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:23.882191  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:26.416809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:26.427382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:26.427441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:26.455816  896760 cri.go:89] found id: ""
	I1208 00:38:26.455831  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.455838  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:26.455843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:26.455901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:26.481460  896760 cri.go:89] found id: ""
	I1208 00:38:26.481475  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.481482  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:26.481487  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:26.481552  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:26.511736  896760 cri.go:89] found id: ""
	I1208 00:38:26.511750  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.511757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:26.511764  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:26.511824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:26.538164  896760 cri.go:89] found id: ""
	I1208 00:38:26.538185  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.538192  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:26.538197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:26.538263  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:26.564400  896760 cri.go:89] found id: ""
	I1208 00:38:26.564415  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.564423  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:26.564428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:26.564499  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:26.592652  896760 cri.go:89] found id: ""
	I1208 00:38:26.592666  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.592684  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:26.592690  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:26.592756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:26.628887  896760 cri.go:89] found id: ""
	I1208 00:38:26.628913  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.628920  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:26.628928  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:26.628939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:26.645510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:26.645526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:26.715196  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:26.706568   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.707169   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.708723   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.709144   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.710667   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:26.715212  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:26.715223  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:26.776374  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:26.776415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:26.805091  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:26.805108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.367761  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:29.378770  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:29.378841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:29.407904  896760 cri.go:89] found id: ""
	I1208 00:38:29.407918  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.407925  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:29.407937  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:29.407996  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:29.439249  896760 cri.go:89] found id: ""
	I1208 00:38:29.439263  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.439270  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:29.439275  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:29.439335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:29.464738  896760 cri.go:89] found id: ""
	I1208 00:38:29.464752  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.464760  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:29.464765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:29.464821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:29.491063  896760 cri.go:89] found id: ""
	I1208 00:38:29.491077  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.491085  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:29.491094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:29.491170  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:29.516981  896760 cri.go:89] found id: ""
	I1208 00:38:29.516995  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.517003  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:29.517008  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:29.517068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:29.542623  896760 cri.go:89] found id: ""
	I1208 00:38:29.542637  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.542644  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:29.542649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:29.542706  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:29.568339  896760 cri.go:89] found id: ""
	I1208 00:38:29.568354  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.568361  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:29.568368  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:29.568377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.628127  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:29.628145  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:29.643477  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:29.643493  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:29.719175  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:29.719187  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:29.719198  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:29.782292  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:29.782317  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.310785  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:32.321344  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:32.321408  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:32.347142  896760 cri.go:89] found id: ""
	I1208 00:38:32.347156  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.347163  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:32.347184  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:32.347243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:32.372733  896760 cri.go:89] found id: ""
	I1208 00:38:32.372748  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.372784  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:32.372789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:32.372848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:32.397366  896760 cri.go:89] found id: ""
	I1208 00:38:32.397381  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.397388  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:32.397394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:32.397458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:32.422998  896760 cri.go:89] found id: ""
	I1208 00:38:32.423012  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.423019  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:32.423025  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:32.423092  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:32.454075  896760 cri.go:89] found id: ""
	I1208 00:38:32.454089  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.454096  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:32.454102  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:32.454163  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:32.480907  896760 cri.go:89] found id: ""
	I1208 00:38:32.480931  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.480938  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:32.480945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:32.481033  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:32.508537  896760 cri.go:89] found id: ""
	I1208 00:38:32.508551  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.508559  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:32.508567  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:32.508577  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.536959  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:32.536977  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:32.594663  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:32.594683  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:32.611007  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:32.611023  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:32.685259  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:32.676109   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.676744   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.678716   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.679323   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.681064   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:32.685271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:32.685293  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:35.252296  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.262679  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:35.262743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:35.288362  896760 cri.go:89] found id: ""
	I1208 00:38:35.288376  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.288384  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:35.288389  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:35.288459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:35.316681  896760 cri.go:89] found id: ""
	I1208 00:38:35.316694  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.316702  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:35.316708  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:35.316771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:35.341646  896760 cri.go:89] found id: ""
	I1208 00:38:35.341661  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.341668  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:35.341673  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:35.341737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:35.367257  896760 cri.go:89] found id: ""
	I1208 00:38:35.367271  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.367278  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:35.367284  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:35.367343  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:35.391511  896760 cri.go:89] found id: ""
	I1208 00:38:35.391526  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.391533  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:35.391538  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:35.391607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:35.416046  896760 cri.go:89] found id: ""
	I1208 00:38:35.416059  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.416067  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:35.416073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:35.416186  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:35.441892  896760 cri.go:89] found id: ""
	I1208 00:38:35.441906  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.441913  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:35.441921  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:35.441930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:35.498141  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:35.498159  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:35.513190  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:35.513206  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:35.577909  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:35.569957   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.570570   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572191   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572549   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.574028   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:35.577920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:35.577930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:35.650521  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:35.650540  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:38.186415  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.196707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:38.196765  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:38.224642  896760 cri.go:89] found id: ""
	I1208 00:38:38.224656  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.224662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:38.224667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:38.224727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:38.250371  896760 cri.go:89] found id: ""
	I1208 00:38:38.250385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.250393  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:38.250397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:38.250490  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:38.275798  896760 cri.go:89] found id: ""
	I1208 00:38:38.275813  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.275820  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:38.275825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:38.275889  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:38.301371  896760 cri.go:89] found id: ""
	I1208 00:38:38.301385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.301393  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:38.301398  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:38.301458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:38.326436  896760 cri.go:89] found id: ""
	I1208 00:38:38.326475  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.326483  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:38.326489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:38.326548  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:38.352684  896760 cri.go:89] found id: ""
	I1208 00:38:38.352698  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.352705  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:38.352711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:38.352770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:38.377358  896760 cri.go:89] found id: ""
	I1208 00:38:38.377372  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.377379  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:38.377424  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:38.377434  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:38.433300  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:38.433319  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:38.448010  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:38.448031  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:38.509419  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:38.500422   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.500861   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.502805   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.503325   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.504822   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:38:38.509429  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:38.509441  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:38.573641  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:38.573660  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:41.124146  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.134622  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:41.134687  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:41.158822  896760 cri.go:89] found id: ""
	I1208 00:38:41.158837  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.158844  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:41.158850  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:41.158907  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:41.183538  896760 cri.go:89] found id: ""
	I1208 00:38:41.183552  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.183559  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:41.183564  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:41.183621  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:41.211762  896760 cri.go:89] found id: ""
	I1208 00:38:41.211776  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.211783  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:41.211789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:41.211846  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:41.237660  896760 cri.go:89] found id: ""
	I1208 00:38:41.237674  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.237681  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:41.237687  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:41.237746  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:41.263629  896760 cri.go:89] found id: ""
	I1208 00:38:41.263644  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.263651  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:41.263656  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:41.263715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:41.289465  896760 cri.go:89] found id: ""
	I1208 00:38:41.289479  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.289486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:41.289498  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:41.289559  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:41.316932  896760 cri.go:89] found id: ""
	I1208 00:38:41.316948  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.316955  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:41.316963  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:41.316974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:41.380746  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:41.380766  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:41.395918  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:41.395934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:41.460910  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:41.460920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:41.460932  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:41.524405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:41.524433  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
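The block above is one full pass of minikube's control-plane scan: for each expected component it runs sudo crictl ps -a --quiet --name=<component> on the node and treats empty output as "no container found". Below is a minimal local sketch of that lookup, assuming crictl and passwordless sudo are available; the helper names are illustrative, not minikube's internal API.

    // Sketch (not minikube's internal API): look up container IDs for one
    // component the way the scan above does. crictl's --quiet mode prints
    // one container ID per line, so empty output means "not found".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs assumes crictl and passwordless sudo exist on the host.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainerIDs(comp)
            if err != nil {
                fmt.Printf("lookup %s failed: %v\n", comp, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", comp)
            }
        }
    }

In the pass above every component returned zero IDs, which is why the scan falls through to the diagnostic log gathering.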
	I1208 00:38:44.057087  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.067409  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:44.067469  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:44.092978  896760 cri.go:89] found id: ""
	I1208 00:38:44.092992  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.093000  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:44.093005  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:44.093063  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:44.118425  896760 cri.go:89] found id: ""
	I1208 00:38:44.118439  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.118468  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:44.118473  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:44.118537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:44.147582  896760 cri.go:89] found id: ""
	I1208 00:38:44.147597  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.147605  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:44.147610  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:44.147672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:44.173039  896760 cri.go:89] found id: ""
	I1208 00:38:44.173052  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.173060  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:44.173066  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:44.173122  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:44.200035  896760 cri.go:89] found id: ""
	I1208 00:38:44.200048  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.200056  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:44.200064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:44.200124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:44.228628  896760 cri.go:89] found id: ""
	I1208 00:38:44.228643  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.228652  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:44.228658  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:44.228723  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:44.253637  896760 cri.go:89] found id: ""
	I1208 00:38:44.253651  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.253658  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:44.253666  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:44.253678  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:44.285985  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:44.286001  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:44.342819  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:44.342837  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:44.357562  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:44.357578  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:44.424802  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:44.424813  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:44.424823  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
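Every failed "describe nodes" attempt carries the same root symptom: dial tcp [::1]:8441: connect: connection refused, meaning nothing is listening on the apiserver port this profile was started with (--apiserver-port=8441). A minimal reachability probe that distinguishes a refused connection from a live listener is sketched below; the probe is illustrative and not part of minikube's own health checking.

    // Sketch: probe the apiserver port. "connection refused" here means no
    // process is bound to the socket at all, matching the errors above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }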
	I1208 00:38:46.987663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.998722  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:46.998782  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:47.028919  896760 cri.go:89] found id: ""
	I1208 00:38:47.028933  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.028941  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:47.028947  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:47.029019  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:47.054503  896760 cri.go:89] found id: ""
	I1208 00:38:47.054517  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.054524  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:47.054529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:47.054591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:47.080198  896760 cri.go:89] found id: ""
	I1208 00:38:47.080213  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.080220  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:47.080226  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:47.080295  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:47.109584  896760 cri.go:89] found id: ""
	I1208 00:38:47.109600  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.109615  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:47.109621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:47.109705  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:47.140105  896760 cri.go:89] found id: ""
	I1208 00:38:47.140121  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.140128  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:47.140134  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:47.140194  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:47.170105  896760 cri.go:89] found id: ""
	I1208 00:38:47.170119  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.170126  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:47.170131  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:47.170192  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:47.194381  896760 cri.go:89] found id: ""
	I1208 00:38:47.194396  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.194403  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:47.194411  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:47.194421  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:47.250853  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:47.250872  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:47.265858  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:47.265878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:47.337098  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:47.328184   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.328652   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330348   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330938   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.332507   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:47.328184   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.328652   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330348   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330938   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.332507   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:47.337113  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:47.337129  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:47.400033  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:47.400053  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:49.930600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.941210  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:49.941272  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:49.965928  896760 cri.go:89] found id: ""
	I1208 00:38:49.965942  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.965949  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:49.965954  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:49.966013  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:49.991571  896760 cri.go:89] found id: ""
	I1208 00:38:49.991585  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.991592  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:49.991597  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:49.991661  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:50.031199  896760 cri.go:89] found id: ""
	I1208 00:38:50.031218  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.031226  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:50.031233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:50.031308  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:50.058807  896760 cri.go:89] found id: ""
	I1208 00:38:50.058822  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.058830  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:50.058836  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:50.058898  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:50.089259  896760 cri.go:89] found id: ""
	I1208 00:38:50.089273  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.089281  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:50.089287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:50.089360  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:50.115363  896760 cri.go:89] found id: ""
	I1208 00:38:50.115377  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.115385  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:50.115391  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:50.115454  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:50.144975  896760 cri.go:89] found id: ""
	I1208 00:38:50.144990  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.144998  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:50.145006  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:50.145020  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:50.160213  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:50.160230  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:50.226659  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:50.226669  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:50.226681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:50.288844  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:50.288865  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:50.321807  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:50.321824  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
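The "Gathering logs for ..." steps all reduce to a fixed set of shell pipelines run through /bin/bash -c: the last 400 lines of the kubelet and containerd journals, a severity-filtered dmesg, and a crictl listing with a docker fallback. The sketch below replays those exact pipelines locally; the gather wrapper is illustrative, and bash, sudo, journalctl, and crictl or docker are assumed to exist on the node.

    // Sketch: replay the diagnostic pipelines from the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one pipeline through bash, mirroring the log's
    // `/bin/bash -c "..."` invocations, and prints the combined output.
    func gather(label, cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("== %s (err: %v) ==\n%s\n", label, err, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("containerd", "sudo journalctl -u containerd -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }

Note the ordering of the gather steps varies between passes in the log, but the set of commands is constant.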
	I1208 00:38:52.878758  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.892078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:52.892141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:52.919955  896760 cri.go:89] found id: ""
	I1208 00:38:52.919969  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.919977  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:52.919982  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:52.920041  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:52.946242  896760 cri.go:89] found id: ""
	I1208 00:38:52.946256  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.946264  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:52.946269  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:52.946331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:52.976452  896760 cri.go:89] found id: ""
	I1208 00:38:52.976467  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.976475  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:52.976480  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:52.976542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:53.005608  896760 cri.go:89] found id: ""
	I1208 00:38:53.005635  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.005644  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:53.005652  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:53.005729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:53.033758  896760 cri.go:89] found id: ""
	I1208 00:38:53.033773  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.033784  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:53.033789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:53.033848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:53.063554  896760 cri.go:89] found id: ""
	I1208 00:38:53.063568  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.063575  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:53.063581  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:53.063644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:53.093217  896760 cri.go:89] found id: ""
	I1208 00:38:53.093233  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.093241  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:53.093249  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:53.093260  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:53.152571  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:53.152591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:53.167769  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:53.167785  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:53.232572  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:53.232583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:53.232604  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:53.301625  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:53.301653  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:55.831231  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.843576  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:55.843680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:55.878177  896760 cri.go:89] found id: ""
	I1208 00:38:55.878191  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.878198  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:55.878203  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:55.878260  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:55.904640  896760 cri.go:89] found id: ""
	I1208 00:38:55.904660  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.904667  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:55.904672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:55.904729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:55.930143  896760 cri.go:89] found id: ""
	I1208 00:38:55.930156  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.930163  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:55.930168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:55.930223  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:55.954696  896760 cri.go:89] found id: ""
	I1208 00:38:55.954710  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.954717  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:55.954723  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:55.954779  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:55.979424  896760 cri.go:89] found id: ""
	I1208 00:38:55.979438  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.979445  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:55.979453  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:55.979513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:56.010864  896760 cri.go:89] found id: ""
	I1208 00:38:56.010879  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.010887  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:56.010893  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:56.010959  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:56.038141  896760 cri.go:89] found id: ""
	I1208 00:38:56.038155  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.038163  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:56.038171  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:56.038183  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:56.105328  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:56.105339  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:56.105350  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:56.167859  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:56.167878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:56.195618  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:56.195634  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:56.254386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:56.254406  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
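The timestamps (00:38:41, :44, :47, :50, :53, :56, ...) show the outer loop re-probing roughly every three seconds, each pass opening with pgrep -xnf kube-apiserver.*minikube.*, that is, an exact match against the full command line of the newest matching process. A hedged sketch of that poll-until-deadline pattern follows; the three-second interval matches the observed cadence, while the one-minute timeout is illustrative rather than minikube's actual value.

    // Sketch of the polling pattern implied by the timestamps above.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's process check: -x exact match,
    // -n newest process, -f match against the full command line.
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }

With no apiserver process ever appearing, every pass here ends the same way until minikube gives up.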
	I1208 00:38:58.770585  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.780949  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:58.781010  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:58.804624  896760 cri.go:89] found id: ""
	I1208 00:38:58.804638  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.804645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:58.804651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:58.804710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:58.830257  896760 cri.go:89] found id: ""
	I1208 00:38:58.830271  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.830278  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:58.830283  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:58.830341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:58.870359  896760 cri.go:89] found id: ""
	I1208 00:38:58.870383  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.870390  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:58.870396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:58.870501  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:58.897347  896760 cri.go:89] found id: ""
	I1208 00:38:58.897361  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.897368  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:58.897373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:58.897431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:58.927474  896760 cri.go:89] found id: ""
	I1208 00:38:58.927488  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.927496  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:58.927501  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:58.927563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:58.953358  896760 cri.go:89] found id: ""
	I1208 00:38:58.953372  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.953380  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:58.953386  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:58.953443  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:58.978092  896760 cri.go:89] found id: ""
	I1208 00:38:58.978107  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.978116  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:58.978124  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:58.978134  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:59.008505  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:59.008524  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:59.067065  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:59.067095  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:59.081827  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:59.081843  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:59.148151  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:59.148161  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:59.148172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:01.713848  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.724264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:01.724326  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:01.752237  896760 cri.go:89] found id: ""
	I1208 00:39:01.752251  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.752258  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:01.752264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:01.752325  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:01.778116  896760 cri.go:89] found id: ""
	I1208 00:39:01.778129  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.778136  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:01.778141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:01.778213  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:01.807711  896760 cri.go:89] found id: ""
	I1208 00:39:01.807725  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.807731  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:01.807737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:01.807798  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:01.836797  896760 cri.go:89] found id: ""
	I1208 00:39:01.836812  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.836820  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:01.836826  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:01.836884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:01.863221  896760 cri.go:89] found id: ""
	I1208 00:39:01.863235  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.863242  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:01.863247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:01.863307  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:01.902460  896760 cri.go:89] found id: ""
	I1208 00:39:01.902476  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.902483  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:01.902489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:01.902558  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:01.930861  896760 cri.go:89] found id: ""
	I1208 00:39:01.930874  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.930882  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:01.930889  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:01.930900  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:01.987172  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:01.987190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:02.006975  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:02.006993  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:02.075975  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:02.076005  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:02.076017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:02.142423  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:02.142453  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
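The "describe nodes" diagnostic uses the version-pinned kubectl that minikube stages under /var/lib/minikube/binaries/<version>/ together with the node-local kubeconfig, so it exercises exactly the endpoint the cluster itself would use. A minimal sketch of that invocation, with paths copied from the log and the error handling purely illustrative:

    // Sketch: the version-pinned kubectl call behind "describe nodes".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            // With no apiserver listening, this exits with status 1 and
            // "connection refused", exactly as captured above.
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("%s", out)
    }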
	I1208 00:39:04.675643  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.688662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:04.688743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:04.716050  896760 cri.go:89] found id: ""
	I1208 00:39:04.716065  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.716072  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:04.716078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:04.716141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:04.742668  896760 cri.go:89] found id: ""
	I1208 00:39:04.742682  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.742690  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:04.742695  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:04.742756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:04.769375  896760 cri.go:89] found id: ""
	I1208 00:39:04.769388  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.769396  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:04.769401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:04.769459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:04.795270  896760 cri.go:89] found id: ""
	I1208 00:39:04.795284  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.795291  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:04.795297  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:04.795354  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:04.822245  896760 cri.go:89] found id: ""
	I1208 00:39:04.822258  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.822265  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:04.822271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:04.822330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:04.859401  896760 cri.go:89] found id: ""
	I1208 00:39:04.859414  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.859422  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:04.859428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:04.859486  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:04.896707  896760 cri.go:89] found id: ""
	I1208 00:39:04.896721  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.896728  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:04.896736  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:04.896745  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:04.967586  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:04.967603  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:04.983057  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:04.983080  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:05.060799  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:05.060821  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:05.060832  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:05.123856  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:05.123875  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
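The block above is one full probe cycle: for each control-plane component named in the log (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), minikube runs crictl with a name filter, and an empty ID list means that component's container was never created. A minimal Go sketch of the same check, assuming only that crictl is on PATH and can reach the containerd CRI socket (this is an illustration, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Same probe as the logged command: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }

Here every component comes back empty, which is why each cycle falls through to log gathering.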
	I1208 00:39:07.653529  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.664109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:07.664168  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:07.689362  896760 cri.go:89] found id: ""
	I1208 00:39:07.689376  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.689383  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:07.689388  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:07.689448  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:07.714707  896760 cri.go:89] found id: ""
	I1208 00:39:07.714722  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.714729  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:07.714734  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:07.714792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:07.740750  896760 cri.go:89] found id: ""
	I1208 00:39:07.740765  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.740771  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:07.740777  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:07.740834  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:07.765622  896760 cri.go:89] found id: ""
	I1208 00:39:07.765637  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.765645  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:07.765650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:07.765714  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:07.790729  896760 cri.go:89] found id: ""
	I1208 00:39:07.790744  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.790751  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:07.790756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:07.790824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:07.821100  896760 cri.go:89] found id: ""
	I1208 00:39:07.821114  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.821122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:07.821127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:07.821185  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:07.855011  896760 cri.go:89] found id: ""
	I1208 00:39:07.855025  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.855042  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:07.855050  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:07.855061  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:07.916163  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:07.916184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:07.931656  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:07.931672  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:08.007997  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:08.008026  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:08.008039  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:08.079922  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:08.079944  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
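Each "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach https://localhost:8441 and gets connection refused, meaning nothing is listening on the apiserver port this test configured (--apiserver-port=8441). Connection refused is distinct from a hang or a TLS failure, and a plain TCP dial makes that difference visible. A minimal sketch, assuming the apiserver should be listening on localhost:8441 inside the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here means no listener on the port at all,
        // matching the kubectl errors in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("port 8441 is accepting connections")
    }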
	I1208 00:39:10.614429  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.625953  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:10.626015  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:10.651687  896760 cri.go:89] found id: ""
	I1208 00:39:10.651701  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.651708  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:10.651714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:10.651774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:10.676412  896760 cri.go:89] found id: ""
	I1208 00:39:10.676426  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.676433  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:10.676439  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:10.676507  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:10.705971  896760 cri.go:89] found id: ""
	I1208 00:39:10.705986  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.705992  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:10.705998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:10.706058  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:10.730598  896760 cri.go:89] found id: ""
	I1208 00:39:10.730621  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.730629  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:10.730634  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:10.730695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:10.756672  896760 cri.go:89] found id: ""
	I1208 00:39:10.756694  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.756702  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:10.756707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:10.756770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:10.786647  896760 cri.go:89] found id: ""
	I1208 00:39:10.786671  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.786679  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:10.786685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:10.786753  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:10.813024  896760 cri.go:89] found id: ""
	I1208 00:39:10.813037  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.813045  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:10.813063  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:10.813074  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:10.870687  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:10.870718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:10.887434  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:10.887451  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:10.955043  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:10.955053  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:10.955064  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:11.016735  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:11.016756  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
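The timestamps show the cadence: each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.* (is a kube-apiserver process running at all?) and, finding none, repeats roughly every three seconds. A hypothetical sketch of that wait loop, where the two-minute deadline is an assumption for illustration rather than a value taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not from the log
        for time.Now().Before(deadline) {
            // Same probe as the logged command; pgrep exits non-zero when no
            // matching process exists, which Run() reports as an error.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }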
	I1208 00:39:13.547727  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.558158  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:13.558216  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:13.583025  896760 cri.go:89] found id: ""
	I1208 00:39:13.583045  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.583053  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:13.583058  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:13.583119  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:13.608731  896760 cri.go:89] found id: ""
	I1208 00:39:13.608744  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.608751  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:13.608756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:13.608815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:13.634817  896760 cri.go:89] found id: ""
	I1208 00:39:13.634831  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.634838  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:13.634843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:13.634905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:13.659255  896760 cri.go:89] found id: ""
	I1208 00:39:13.659269  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.659276  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:13.659281  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:13.659341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:13.683853  896760 cri.go:89] found id: ""
	I1208 00:39:13.683867  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.683882  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:13.683888  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:13.683949  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:13.708780  896760 cri.go:89] found id: ""
	I1208 00:39:13.708795  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.708802  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:13.708807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:13.708864  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:13.734678  896760 cri.go:89] found id: ""
	I1208 00:39:13.734692  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.734699  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:13.734708  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:13.734718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:13.790576  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:13.790597  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:13.805551  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:13.805567  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:13.884689  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:13.884710  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:13.884721  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:13.954356  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:13.954379  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:16.485706  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.496517  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:16.496577  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:16.526348  896760 cri.go:89] found id: ""
	I1208 00:39:16.526363  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.526370  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:16.526376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:16.526459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:16.551935  896760 cri.go:89] found id: ""
	I1208 00:39:16.551949  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.551962  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:16.551968  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:16.552028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:16.576320  896760 cri.go:89] found id: ""
	I1208 00:39:16.576333  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.576340  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:16.576345  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:16.576403  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:16.605756  896760 cri.go:89] found id: ""
	I1208 00:39:16.605770  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.605777  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:16.605783  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:16.605839  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:16.632121  896760 cri.go:89] found id: ""
	I1208 00:39:16.632134  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.632141  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:16.632146  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:16.632203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:16.660423  896760 cri.go:89] found id: ""
	I1208 00:39:16.660437  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.660444  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:16.660450  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:16.660531  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:16.685576  896760 cri.go:89] found id: ""
	I1208 00:39:16.685595  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.685602  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:16.685610  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:16.685620  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:16.740694  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:16.740712  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:16.755790  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:16.755806  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:16.821132  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:16.821152  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:16.821164  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:16.887057  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:16.887076  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
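The "container status" step uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. try crictl first and fall back to docker ps -a if crictl is missing or fails. An approximate Go equivalent, assuming at least one of the two CLIs is installed on the node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it resolves on PATH, mirroring `which crictl`.
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // Fall back to docker, mirroring the `|| sudo docker ps -a` branch.
        out, _ := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
    }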
	I1208 00:39:19.418598  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.428681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:19.428748  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:19.455940  896760 cri.go:89] found id: ""
	I1208 00:39:19.455953  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.455961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:19.455966  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:19.456027  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:19.482046  896760 cri.go:89] found id: ""
	I1208 00:39:19.482060  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.482067  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:19.482073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:19.482130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:19.510706  896760 cri.go:89] found id: ""
	I1208 00:39:19.510720  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.510728  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:19.510733  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:19.510792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:19.535505  896760 cri.go:89] found id: ""
	I1208 00:39:19.535520  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.535528  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:19.535533  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:19.535601  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:19.560234  896760 cri.go:89] found id: ""
	I1208 00:39:19.560248  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.560255  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:19.560261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:19.560328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:19.584606  896760 cri.go:89] found id: ""
	I1208 00:39:19.584621  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.584629  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:19.584637  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:19.584695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:19.613195  896760 cri.go:89] found id: ""
	I1208 00:39:19.613226  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.613234  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:19.613242  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:19.613252  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:19.670165  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:19.670184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:19.685327  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:19.685351  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:19.749894  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:19.749914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:19.749928  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:19.812758  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:19.812779  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:22.352520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.362719  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:22.362790  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:22.387649  896760 cri.go:89] found id: ""
	I1208 00:39:22.387662  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.387669  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:22.387675  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:22.387734  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:22.416444  896760 cri.go:89] found id: ""
	I1208 00:39:22.416458  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.416465  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:22.416470  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:22.416538  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:22.442291  896760 cri.go:89] found id: ""
	I1208 00:39:22.442305  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.442312  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:22.442317  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:22.442377  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:22.466919  896760 cri.go:89] found id: ""
	I1208 00:39:22.466933  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.466940  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:22.466945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:22.467011  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:22.492435  896760 cri.go:89] found id: ""
	I1208 00:39:22.492449  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.492456  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:22.492461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:22.492526  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:22.518157  896760 cri.go:89] found id: ""
	I1208 00:39:22.518183  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.518190  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:22.518197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:22.518266  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:22.544341  896760 cri.go:89] found id: ""
	I1208 00:39:22.544356  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.544363  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:22.544371  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:22.544389  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:22.601655  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:22.601676  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:22.617670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:22.617700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:22.686714  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:22.686725  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:22.686736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:22.749600  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:22.749621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.281783  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.292163  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:25.292227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:25.316234  896760 cri.go:89] found id: ""
	I1208 00:39:25.316249  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.316257  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:25.316262  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:25.316330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:25.350433  896760 cri.go:89] found id: ""
	I1208 00:39:25.350478  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.350485  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:25.350491  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:25.350562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:25.376982  896760 cri.go:89] found id: ""
	I1208 00:39:25.376996  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.377004  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:25.377009  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:25.377076  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:25.402484  896760 cri.go:89] found id: ""
	I1208 00:39:25.402499  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.402506  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:25.402511  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:25.402580  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:25.429596  896760 cri.go:89] found id: ""
	I1208 00:39:25.429611  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.429618  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:25.429624  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:25.429692  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:25.455037  896760 cri.go:89] found id: ""
	I1208 00:39:25.455051  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.455059  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:25.455064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:25.455130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:25.484391  896760 cri.go:89] found id: ""
	I1208 00:39:25.484404  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.484412  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:25.484420  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:25.484430  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.512262  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:25.512282  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:25.569524  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:25.569543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:25.584301  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:25.584316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:25.650571  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:25.650583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:25.650594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.218586  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.229069  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:28.229127  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:28.254473  896760 cri.go:89] found id: ""
	I1208 00:39:28.254487  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.254494  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:28.254499  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:28.254563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:28.283388  896760 cri.go:89] found id: ""
	I1208 00:39:28.283403  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.283410  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:28.283418  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:28.283475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:28.310968  896760 cri.go:89] found id: ""
	I1208 00:39:28.310983  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.310990  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:28.310995  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:28.311061  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:28.336049  896760 cri.go:89] found id: ""
	I1208 00:39:28.336064  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.336072  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:28.336078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:28.336141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:28.360451  896760 cri.go:89] found id: ""
	I1208 00:39:28.360464  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.360470  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:28.360475  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:28.360542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:28.385117  896760 cri.go:89] found id: ""
	I1208 00:39:28.385131  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.385138  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:28.385143  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:28.385196  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:28.408915  896760 cri.go:89] found id: ""
	I1208 00:39:28.408928  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.408935  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:28.408943  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:28.408953  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:28.423316  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:28.423332  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:28.486812  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:28.486823  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:28.486833  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.553325  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:28.553344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:28.582011  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:28.582027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
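What this stretch of the trace shows: the start-up wait loop re-runs the same probe cycle roughly every three seconds — a pgrep for a running kube-apiserver process, then a crictl listing for each expected control-plane container — and every probe comes back empty, so the log gathering repeats. A minimal standalone equivalent of one probe pass, assuming shell access to the minikube node (the pgrep and crictl command lines are taken verbatim from the trace; the loop, quoting, and echo scaffolding are illustrative only):

    # One pass of the probe cycle seen above (illustrative wrapper;
    # the probe commands themselves are verbatim from the log).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || echo 'kube-apiserver process not running'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] || echo "no container matching \"$name\""
    done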
	I1208 00:39:31.143204  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.154196  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:31.154264  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:31.181624  896760 cri.go:89] found id: ""
	I1208 00:39:31.181638  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.181645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:31.181650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:31.181713  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:31.207658  896760 cri.go:89] found id: ""
	I1208 00:39:31.207672  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.207679  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:31.207684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:31.207742  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:31.233323  896760 cri.go:89] found id: ""
	I1208 00:39:31.233338  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.233345  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:31.233351  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:31.233411  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:31.258320  896760 cri.go:89] found id: ""
	I1208 00:39:31.258335  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.258342  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:31.258347  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:31.258406  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:31.283846  896760 cri.go:89] found id: ""
	I1208 00:39:31.283860  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.283868  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:31.283873  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:31.283931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:31.310064  896760 cri.go:89] found id: ""
	I1208 00:39:31.310079  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.310086  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:31.310091  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:31.310149  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:31.337328  896760 cri.go:89] found id: ""
	I1208 00:39:31.337350  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.337358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:31.337367  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:31.337377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.392950  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:31.392969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:31.407922  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:31.407939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:31.474904  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:31.474915  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:31.474925  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:31.536814  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:31.536834  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
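The dmesg step in each cycle narrows the kernel log to recent problem-level messages only. Annotated form of the same command (the command is verbatim from the trace; flag meanings are per util-linux dmesg):

    # -P: no pager   -H: human-readable timestamps   -L=never: no color
    # --level warn,err,crit,alert,emerg: only warning severity and worse
    # tail -n 400: keep just the most recent 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400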
	I1208 00:39:34.069082  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:34.079471  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:34.079532  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:34.121832  896760 cri.go:89] found id: ""
	I1208 00:39:34.121846  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.121853  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:34.121859  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:34.121923  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:34.151527  896760 cri.go:89] found id: ""
	I1208 00:39:34.151541  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.151548  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:34.151553  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:34.151613  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:34.179098  896760 cri.go:89] found id: ""
	I1208 00:39:34.179113  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.179121  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:34.179126  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:34.179184  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:34.209529  896760 cri.go:89] found id: ""
	I1208 00:39:34.209548  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.209563  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:34.209568  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:34.209655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:34.238235  896760 cri.go:89] found id: ""
	I1208 00:39:34.238249  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.238256  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:34.238261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:34.238318  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:34.263739  896760 cri.go:89] found id: ""
	I1208 00:39:34.263752  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.263760  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:34.263765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:34.263838  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:34.293315  896760 cri.go:89] found id: ""
	I1208 00:39:34.293330  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.293337  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:34.293345  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:34.293356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:34.348849  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:34.348873  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:34.363941  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:34.363958  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:34.430475  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:34.430487  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:34.430501  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:34.492396  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:34.492415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.025457  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:37.036130  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:37.036201  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:37.063581  896760 cri.go:89] found id: ""
	I1208 00:39:37.063595  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.063602  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:37.063609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:37.063672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:37.088301  896760 cri.go:89] found id: ""
	I1208 00:39:37.088320  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.088328  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:37.088334  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:37.088395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:37.124388  896760 cri.go:89] found id: ""
	I1208 00:39:37.124402  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.124409  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:37.124417  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:37.124474  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:37.154807  896760 cri.go:89] found id: ""
	I1208 00:39:37.154821  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.154838  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:37.154843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:37.154912  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:37.180191  896760 cri.go:89] found id: ""
	I1208 00:39:37.180204  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.180212  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:37.180217  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:37.180279  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:37.205379  896760 cri.go:89] found id: ""
	I1208 00:39:37.205394  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.205402  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:37.205408  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:37.205487  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:37.233231  896760 cri.go:89] found id: ""
	I1208 00:39:37.233245  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.233264  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:37.233271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:37.233280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:37.297690  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:37.297709  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.325655  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:37.325682  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:37.385822  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:37.385841  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:37.400660  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:37.400685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:37.463113  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
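Every describe-nodes attempt in this stretch fails the same way: kubectl cannot reach https://localhost:8441 because nothing is serving the API — the crictl probes above find no kube-apiserver container, so nothing ever bound the port. A quick manual check that would confirm the symptom on the node, assuming the iproute2 ss tool is available there (illustrative; this check is not part of the trace above):

    # Confirm the symptom from the stderr blocks: no listener on the apiserver port.
    sudo ss -tln | grep ':8441' || echo 'nothing listening on :8441'
    # Consistent with: dial tcp [::1]:8441: connect: connection refused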
	I1208 00:39:39.963375  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:39.974152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:39.974214  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:40.011461  896760 cri.go:89] found id: ""
	I1208 00:39:40.011477  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.011485  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:40.011492  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:40.011588  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:40.050774  896760 cri.go:89] found id: ""
	I1208 00:39:40.050789  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.050810  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:40.050819  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:40.050895  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:40.078693  896760 cri.go:89] found id: ""
	I1208 00:39:40.078712  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.078737  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:40.078743  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:40.078832  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:40.119774  896760 cri.go:89] found id: ""
	I1208 00:39:40.119787  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.119806  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:40.119812  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:40.119870  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:40.150656  896760 cri.go:89] found id: ""
	I1208 00:39:40.150682  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.150689  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:40.150694  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:40.150761  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:40.182218  896760 cri.go:89] found id: ""
	I1208 00:39:40.182233  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.182247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:40.182253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:40.182329  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:40.212756  896760 cri.go:89] found id: ""
	I1208 00:39:40.212770  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.212778  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:40.212786  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:40.212796  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:40.271111  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:40.271135  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:40.286128  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:40.286144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:40.350612  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:40.350622  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:40.350633  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:40.413198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:40.413217  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:42.941473  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:42.951830  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:42.951896  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:42.977279  896760 cri.go:89] found id: ""
	I1208 00:39:42.977294  896760 logs.go:282] 0 containers: []
	W1208 00:39:42.977303  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:42.977309  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:42.977378  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:43.005862  896760 cri.go:89] found id: ""
	I1208 00:39:43.005878  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.005886  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:43.005891  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:43.006072  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:43.033594  896760 cri.go:89] found id: ""
	I1208 00:39:43.033609  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.033616  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:43.033621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:43.033700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:43.058971  896760 cri.go:89] found id: ""
	I1208 00:39:43.058986  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.058993  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:43.058999  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:43.059056  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:43.084568  896760 cri.go:89] found id: ""
	I1208 00:39:43.084582  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.084590  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:43.084595  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:43.084657  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:43.121795  896760 cri.go:89] found id: ""
	I1208 00:39:43.121810  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.121818  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:43.121823  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:43.121884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:43.151337  896760 cri.go:89] found id: ""
	I1208 00:39:43.151351  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.151358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:43.151365  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:43.151375  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:43.212011  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:43.212032  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:43.227510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:43.227526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:43.293650  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:43.293672  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:43.293684  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:43.355405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:43.355425  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:45.883665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:45.894220  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:45.894287  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:45.919120  896760 cri.go:89] found id: ""
	I1208 00:39:45.919134  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.919141  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:45.919147  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:45.919202  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:45.944078  896760 cri.go:89] found id: ""
	I1208 00:39:45.944092  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.944100  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:45.944105  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:45.944171  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:45.969419  896760 cri.go:89] found id: ""
	I1208 00:39:45.969433  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.969440  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:45.969445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:45.969504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:45.999721  896760 cri.go:89] found id: ""
	I1208 00:39:45.999736  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.999744  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:45.999749  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:45.999807  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:46.027671  896760 cri.go:89] found id: ""
	I1208 00:39:46.027685  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.027697  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:46.027705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:46.027763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:46.053035  896760 cri.go:89] found id: ""
	I1208 00:39:46.053050  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.053058  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:46.053064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:46.053124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:46.077745  896760 cri.go:89] found id: ""
	I1208 00:39:46.077759  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.077767  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:46.077775  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:46.077786  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:46.137068  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:46.137086  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:46.153304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:46.153320  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:46.226313  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:46.226334  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:46.226345  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:46.290116  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:46.290137  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:48.819903  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:48.830265  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:48.830328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:48.855396  896760 cri.go:89] found id: ""
	I1208 00:39:48.855411  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.855418  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:48.855423  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:48.855483  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:48.880269  896760 cri.go:89] found id: ""
	I1208 00:39:48.880282  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.880289  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:48.880294  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:48.880353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:48.904626  896760 cri.go:89] found id: ""
	I1208 00:39:48.904641  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.904648  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:48.904653  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:48.904715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:48.930484  896760 cri.go:89] found id: ""
	I1208 00:39:48.930511  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.930519  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:48.930528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:48.930609  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:48.956159  896760 cri.go:89] found id: ""
	I1208 00:39:48.956173  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.956180  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:48.956185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:48.956243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:48.984643  896760 cri.go:89] found id: ""
	I1208 00:39:48.984657  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.984664  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:48.984670  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:48.984737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:49.012694  896760 cri.go:89] found id: ""
	I1208 00:39:49.012708  896760 logs.go:282] 0 containers: []
	W1208 00:39:49.012716  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:49.012724  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:49.012736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:49.042898  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:49.042915  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:49.099079  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:49.099099  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:49.118877  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:49.118895  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:49.190253  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:49.190263  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:49.190273  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
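The "container status" gathering step in each cycle uses a small shell fallback chain: it prefers crictl when it is on PATH, and if the crictl invocation fails it falls back to docker ps. The same command with the backticks rewritten as $( ) for readability (same behavior):

    # Use crictl if installed; if that invocation fails, try docker instead.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a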
	I1208 00:39:51.751406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:51.761914  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:51.761973  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:51.788354  896760 cri.go:89] found id: ""
	I1208 00:39:51.788367  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.788375  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:51.788381  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:51.788441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:51.816636  896760 cri.go:89] found id: ""
	I1208 00:39:51.816651  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.816658  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:51.816664  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:51.816735  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:51.842160  896760 cri.go:89] found id: ""
	I1208 00:39:51.842174  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.842181  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:51.842187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:51.842249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:51.867343  896760 cri.go:89] found id: ""
	I1208 00:39:51.867358  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.867365  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:51.867371  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:51.867432  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:51.891589  896760 cri.go:89] found id: ""
	I1208 00:39:51.891604  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.891611  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:51.891616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:51.891681  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:51.915982  896760 cri.go:89] found id: ""
	I1208 00:39:51.915997  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.916016  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:51.916023  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:51.916081  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:51.940386  896760 cri.go:89] found id: ""
	I1208 00:39:51.940399  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.940406  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:51.940414  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:51.940424  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:51.995386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:51.995404  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:52.011670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:52.011689  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:52.085018  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:52.085029  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:52.085041  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:52.155066  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:52.155085  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
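	The block above is one full iteration of minikube's wait-for-apiserver loop: it pgreps for a kube-apiserver process, asks crictl for each control-plane container in the k8s.io containerd namespace, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of running the same probe by hand on the node; the profile name is a placeholder, not taken from this log:

	    # open a shell on the minikube node (profile name illustrative)
	    minikube ssh -p <profile>
	    # the same probe the loop runs: any kube-apiserver container, in any state?
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # empty output here corresponds to the 'found id: ""' lines in the log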
	I1208 00:39:54.698041  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:54.708958  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:54.709024  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:54.734899  896760 cri.go:89] found id: ""
	I1208 00:39:54.734913  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.734921  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:54.734926  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:54.734985  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:54.761966  896760 cri.go:89] found id: ""
	I1208 00:39:54.761981  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.761988  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:54.761993  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:54.762052  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:54.787505  896760 cri.go:89] found id: ""
	I1208 00:39:54.787519  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.787526  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:54.787532  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:54.787595  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:54.813125  896760 cri.go:89] found id: ""
	I1208 00:39:54.813139  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.813147  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:54.813152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:54.813212  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:54.840170  896760 cri.go:89] found id: ""
	I1208 00:39:54.840185  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.840193  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:54.840198  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:54.840269  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:54.865780  896760 cri.go:89] found id: ""
	I1208 00:39:54.865794  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.865801  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:54.865807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:54.865867  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:54.890971  896760 cri.go:89] found id: ""
	I1208 00:39:54.890992  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.891000  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:54.891007  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:54.891017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:54.953695  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:54.953715  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:54.985753  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:54.985770  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:55.051156  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:55.051176  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:55.066530  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:55.066547  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:55.148075  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
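	Every one of these describe-nodes attempts fails identically: kubectl cannot reach the apiserver on localhost:8441, and "connection refused" means nothing is listening on that port at all (a TLS or auth problem would fail later, after the TCP connect succeeds). A quick confirmation from inside the node, assuming the usual ss and curl binaries are present in the base image:

	    # is anything bound to the apiserver port?
	    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
	    # a live apiserver would answer (possibly 401/403); refused means no process
	    curl -ksS https://localhost:8441/healthz || true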
	I1208 00:39:57.649726  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:57.660051  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:57.660109  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:57.685992  896760 cri.go:89] found id: ""
	I1208 00:39:57.686008  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.686015  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:57.686022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:57.686165  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:57.711195  896760 cri.go:89] found id: ""
	I1208 00:39:57.711209  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.711216  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:57.711224  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:57.711285  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:57.735850  896760 cri.go:89] found id: ""
	I1208 00:39:57.735864  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.735871  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:57.735877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:57.735936  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:57.761018  896760 cri.go:89] found id: ""
	I1208 00:39:57.761032  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.761040  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:57.761045  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:57.761110  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:57.787523  896760 cri.go:89] found id: ""
	I1208 00:39:57.787537  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.787544  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:57.787550  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:57.787607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:57.813621  896760 cri.go:89] found id: ""
	I1208 00:39:57.813641  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.813648  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:57.813654  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:57.813717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:57.837687  896760 cri.go:89] found id: ""
	I1208 00:39:57.837700  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.837707  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:57.837715  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:57.837725  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:57.901756  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:57.901780  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:57.931916  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:57.931943  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:57.989769  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:57.989791  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:58.005304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:58.005324  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:58.084868  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:58.076761   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.077366   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.078876   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.079370   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.080995   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:00.590352  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:00.608394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:00.608458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:00.646794  896760 cri.go:89] found id: ""
	I1208 00:40:00.646810  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.646818  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:00.646825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:00.646893  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:00.722151  896760 cri.go:89] found id: ""
	I1208 00:40:00.722167  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.722175  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:00.722180  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:00.722252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:00.751689  896760 cri.go:89] found id: ""
	I1208 00:40:00.751705  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.751713  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:00.751720  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:00.751795  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:00.782551  896760 cri.go:89] found id: ""
	I1208 00:40:00.782577  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.782586  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:00.782593  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:00.782674  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:00.813259  896760 cri.go:89] found id: ""
	I1208 00:40:00.813275  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.813282  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:00.813287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:00.813353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:00.843171  896760 cri.go:89] found id: ""
	I1208 00:40:00.843193  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.843201  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:00.843206  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:00.843270  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:00.872240  896760 cri.go:89] found id: ""
	I1208 00:40:00.872266  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.872275  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:00.872283  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:00.872297  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:00.933096  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:00.933116  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:00.949661  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:00.949685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:01.022088  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:01.012633   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.013181   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015065   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015751   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.017482   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:01.022099  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:01.022112  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:01.087987  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:01.088007  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
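	Note that the describe-nodes step does not use a host kubectl: it runs the version-matched binary minikube ships into the node, pointed at the node-local kubeconfig. Reproducing it by hand is the same invocation the log shows, verbatim:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig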
	I1208 00:40:03.623088  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:03.637929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:03.637992  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:03.667258  896760 cri.go:89] found id: ""
	I1208 00:40:03.667272  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.667280  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:03.667286  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:03.667347  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:03.704022  896760 cri.go:89] found id: ""
	I1208 00:40:03.704035  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.704042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:03.704048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:03.704115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:03.733401  896760 cri.go:89] found id: ""
	I1208 00:40:03.733416  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.733423  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:03.733428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:03.733489  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:03.760028  896760 cri.go:89] found id: ""
	I1208 00:40:03.760042  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.760049  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:03.760054  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:03.760113  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:03.784849  896760 cri.go:89] found id: ""
	I1208 00:40:03.784864  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.784871  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:03.784877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:03.784934  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:03.809615  896760 cri.go:89] found id: ""
	I1208 00:40:03.809629  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.809636  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:03.809642  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:03.809700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:03.834857  896760 cri.go:89] found id: ""
	I1208 00:40:03.834872  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.834879  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:03.834886  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:03.834896  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:03.899301  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:03.891341   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.891827   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893391   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893830   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.895307   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:03.899312  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:03.899330  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:03.961403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:03.961422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.990248  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:03.990265  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:04.049257  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:04.049280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
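	With the container probes persistently empty, the gathered journals are where the actual failure will be named: kubelet is the component responsible for starting the kube-apiserver static pod, so its unit log usually explains why the pod never appeared. The same collection commands the loop runs, narrowed to the interesting lines:

	    # kubelet should be creating the kube-apiserver static pod; errors here say why it is not
	    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
	    # containerd's view of the same window
	    sudo journalctl -u containerd -n 400 --no-pager | tail -n 40
	    # kernel-level trouble (OOM kills, cgroup limits) that can take pods down
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400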
	I1208 00:40:06.564731  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:06.575277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:06.575339  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:06.603640  896760 cri.go:89] found id: ""
	I1208 00:40:06.603653  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.603662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:06.603668  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:06.603727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:06.632743  896760 cri.go:89] found id: ""
	I1208 00:40:06.632757  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.632764  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:06.632769  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:06.632830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:06.661586  896760 cri.go:89] found id: ""
	I1208 00:40:06.661600  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.661608  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:06.661613  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:06.661675  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:06.686811  896760 cri.go:89] found id: ""
	I1208 00:40:06.686833  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.686840  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:06.686845  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:06.686905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:06.712624  896760 cri.go:89] found id: ""
	I1208 00:40:06.712639  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.712646  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:06.712651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:06.712710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:06.737865  896760 cri.go:89] found id: ""
	I1208 00:40:06.737878  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.737898  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:06.737903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:06.737971  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:06.763555  896760 cri.go:89] found id: ""
	I1208 00:40:06.763569  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.763576  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:06.763583  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:06.763594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:06.820256  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:06.820275  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:06.835590  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:06.835606  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:06.900244  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:06.891950   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.892370   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.893980   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.894309   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.895881   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:06.900256  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:06.900269  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:06.964553  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:06.964573  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.497887  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:09.511442  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:09.511513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:09.537551  896760 cri.go:89] found id: ""
	I1208 00:40:09.537566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.537573  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:09.537579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:09.537639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:09.564387  896760 cri.go:89] found id: ""
	I1208 00:40:09.564400  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.564408  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:09.564412  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:09.564471  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:09.592551  896760 cri.go:89] found id: ""
	I1208 00:40:09.592566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.592573  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:09.592579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:09.592638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:09.633536  896760 cri.go:89] found id: ""
	I1208 00:40:09.633553  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.633564  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:09.633572  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:09.633644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:09.661685  896760 cri.go:89] found id: ""
	I1208 00:40:09.661700  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.661706  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:09.661711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:09.661773  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:09.689367  896760 cri.go:89] found id: ""
	I1208 00:40:09.689382  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.689390  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:09.689396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:09.689461  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:09.715001  896760 cri.go:89] found id: ""
	I1208 00:40:09.715025  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.715033  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:09.715041  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:09.715052  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.743922  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:09.743939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:09.801833  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:09.801852  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:09.817182  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:09.817199  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:09.885006  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:09.877198   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.877824   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.878892   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.879505   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.881097   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:09.885017  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:09.885028  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.453176  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:12.463998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:12.464059  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:12.489947  896760 cri.go:89] found id: ""
	I1208 00:40:12.489961  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.489968  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:12.489974  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:12.490053  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:12.517571  896760 cri.go:89] found id: ""
	I1208 00:40:12.517586  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.517594  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:12.517601  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:12.517680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:12.549649  896760 cri.go:89] found id: ""
	I1208 00:40:12.549671  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.549679  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:12.549685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:12.549764  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:12.575870  896760 cri.go:89] found id: ""
	I1208 00:40:12.575891  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.575899  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:12.575903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:12.575975  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:12.615650  896760 cri.go:89] found id: ""
	I1208 00:40:12.615664  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.615672  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:12.615677  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:12.615745  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:12.644432  896760 cri.go:89] found id: ""
	I1208 00:40:12.644446  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.644454  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:12.644460  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:12.644536  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:12.671471  896760 cri.go:89] found id: ""
	I1208 00:40:12.671485  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.671492  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:12.671499  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:12.671510  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:12.728175  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:12.728195  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:12.743959  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:12.743975  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:12.816570  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:12.816580  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:12.816591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.879403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:12.879423  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:15.414366  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:15.424841  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:15.424901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:15.450991  896760 cri.go:89] found id: ""
	I1208 00:40:15.451005  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.451012  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:15.451017  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:15.451078  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:15.477340  896760 cri.go:89] found id: ""
	I1208 00:40:15.477354  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.477361  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:15.477366  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:15.477424  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:15.503035  896760 cri.go:89] found id: ""
	I1208 00:40:15.503048  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.503055  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:15.503060  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:15.503125  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:15.527771  896760 cri.go:89] found id: ""
	I1208 00:40:15.527787  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.527794  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:15.527798  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:15.527856  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:15.556600  896760 cri.go:89] found id: ""
	I1208 00:40:15.556627  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.556634  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:15.556639  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:15.556710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:15.582706  896760 cri.go:89] found id: ""
	I1208 00:40:15.582721  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.582728  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:15.582737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:15.582821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:15.628093  896760 cri.go:89] found id: ""
	I1208 00:40:15.628114  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.628121  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:15.628129  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:15.628144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:15.691996  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:15.692026  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:15.707812  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:15.707830  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:15.773396  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:15.773407  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:15.773418  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:15.840937  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:15.840957  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
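	The block above is one iteration of minikube's apiserver wait loop: it looks for a running kube-apiserver process, asks containerd (via crictl) for each control-plane container by name, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs. A minimal sketch of reproducing the probe by hand on the node (an assumption for illustration: SSH access to the node and crictl on the PATH, as in the commands logged above):

    # Is a kube-apiserver process for this profile running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Does containerd know about a kube-apiserver container in any state?
    sudo crictl ps -a --quiet --name=kube-apiserver

	An empty result from both, as in every iteration logged here, means the control plane never started under containerd.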
	I1208 00:40:18.375079  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:18.385866  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:18.385931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:18.412581  896760 cri.go:89] found id: ""
	I1208 00:40:18.412596  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.412603  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:18.412609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:18.412672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:18.443837  896760 cri.go:89] found id: ""
	I1208 00:40:18.443863  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.443871  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:18.443876  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:18.443950  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:18.470522  896760 cri.go:89] found id: ""
	I1208 00:40:18.470549  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.470557  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:18.470565  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:18.470639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:18.500112  896760 cri.go:89] found id: ""
	I1208 00:40:18.500127  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.500136  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:18.500141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:18.500203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:18.528643  896760 cri.go:89] found id: ""
	I1208 00:40:18.528657  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.528666  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:18.528672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:18.528740  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:18.556708  896760 cri.go:89] found id: ""
	I1208 00:40:18.556722  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.556729  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:18.556735  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:18.556799  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:18.586255  896760 cri.go:89] found id: ""
	I1208 00:40:18.586270  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.586277  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:18.586285  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:18.586295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:18.651954  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:18.651974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:18.668271  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:18.668288  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:18.735458  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:18.735469  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:18.735481  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:18.797791  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:18.797811  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
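	Each describe-nodes attempt fails the same way: kubectl cannot reach https://localhost:8441 (the --apiserver-port configured for this profile), so the connection is refused before any API discovery happens. A quick check, assuming ss from iproute2 is available in the node image, to confirm nothing is listening on that port:

    # Show any listener on the apiserver port; silence means no apiserver.
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"

	With no listener, every kubectl call in the log dead-ends in "connection refused" rather than an authentication or TLS error.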
	I1208 00:40:21.328343  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:21.339006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:21.339068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:21.365940  896760 cri.go:89] found id: ""
	I1208 00:40:21.365954  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.365961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:21.365967  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:21.366028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:21.393056  896760 cri.go:89] found id: ""
	I1208 00:40:21.393071  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.393078  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:21.393083  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:21.393147  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:21.418602  896760 cri.go:89] found id: ""
	I1208 00:40:21.418616  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.418624  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:21.418630  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:21.418689  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:21.444947  896760 cri.go:89] found id: ""
	I1208 00:40:21.444963  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.444970  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:21.444976  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:21.445037  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:21.486428  896760 cri.go:89] found id: ""
	I1208 00:40:21.486461  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.486469  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:21.486476  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:21.486537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:21.516432  896760 cri.go:89] found id: ""
	I1208 00:40:21.516448  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.516455  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:21.516461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:21.516527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:21.542473  896760 cri.go:89] found id: ""
	I1208 00:40:21.542488  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.542501  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:21.542510  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:21.542521  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:21.558088  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:21.558105  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:21.646839  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:21.646850  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:21.646861  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:21.711182  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:21.711203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.739373  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:21.739391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.296477  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:24.307018  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:24.307079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:24.333480  896760 cri.go:89] found id: ""
	I1208 00:40:24.333502  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.333521  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:24.333526  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:24.333587  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:24.359023  896760 cri.go:89] found id: ""
	I1208 00:40:24.359037  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.359044  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:24.359049  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:24.359118  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:24.384337  896760 cri.go:89] found id: ""
	I1208 00:40:24.384351  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.384358  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:24.384363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:24.384425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:24.409687  896760 cri.go:89] found id: ""
	I1208 00:40:24.409702  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.409709  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:24.409714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:24.409774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:24.434605  896760 cri.go:89] found id: ""
	I1208 00:40:24.434620  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.434627  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:24.434633  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:24.434690  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:24.464542  896760 cri.go:89] found id: ""
	I1208 00:40:24.464556  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.464569  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:24.464575  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:24.464638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:24.489131  896760 cri.go:89] found id: ""
	I1208 00:40:24.489145  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.489152  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:24.489159  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:24.489170  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.544278  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:24.544298  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:24.560095  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:24.560152  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:24.637902  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:24.637914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:24.637924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:24.706243  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:24.706262  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
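	The "container status" gather uses a small fallback chain rather than assuming a runtime. Expanded into plain bash (a sketch of the same logic; the backtick form in the log is equivalent):

    # Prefer crictl; if `which` finds nothing, the literal word "crictl"
    # is tried (and fails), which triggers the docker fallback.
    sudo "$(which crictl || echo crictl)" ps -a \
      || sudo docker ps -a

	On this containerd node the crictl branch is expected to succeed, which is why no docker output appears in the log.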
	I1208 00:40:27.237246  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:27.247681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:27.247744  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:27.272827  896760 cri.go:89] found id: ""
	I1208 00:40:27.272841  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.272848  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:27.272854  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:27.272917  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:27.298021  896760 cri.go:89] found id: ""
	I1208 00:40:27.298035  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.298042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:27.298048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:27.298115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:27.322943  896760 cri.go:89] found id: ""
	I1208 00:40:27.322975  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.322983  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:27.322989  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:27.323049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:27.348507  896760 cri.go:89] found id: ""
	I1208 00:40:27.348522  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.348530  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:27.348535  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:27.348604  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:27.373824  896760 cri.go:89] found id: ""
	I1208 00:40:27.373838  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.373846  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:27.373851  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:27.373911  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:27.399388  896760 cri.go:89] found id: ""
	I1208 00:40:27.399402  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.399409  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:27.399415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:27.399481  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:27.427571  896760 cri.go:89] found id: ""
	I1208 00:40:27.427596  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.427604  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:27.427612  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:27.427621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:27.492713  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:27.492731  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.522269  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:27.522295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:27.582384  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:27.582402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:27.602834  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:27.602850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:27.689958  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:27.681544   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.682073   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.683995   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.684357   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.685900   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:30.190338  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:30.201839  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:30.201909  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:30.228924  896760 cri.go:89] found id: ""
	I1208 00:40:30.228939  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.228956  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:30.228963  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:30.229026  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:30.255337  896760 cri.go:89] found id: ""
	I1208 00:40:30.255351  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.255358  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:30.255363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:30.255425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:30.281566  896760 cri.go:89] found id: ""
	I1208 00:40:30.281581  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.281588  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:30.281594  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:30.281655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:30.308175  896760 cri.go:89] found id: ""
	I1208 00:40:30.308189  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.308197  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:30.308202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:30.308282  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:30.336203  896760 cri.go:89] found id: ""
	I1208 00:40:30.336218  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.336226  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:30.336241  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:30.336302  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:30.368832  896760 cri.go:89] found id: ""
	I1208 00:40:30.368847  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.368855  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:30.368860  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:30.368940  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:30.396840  896760 cri.go:89] found id: ""
	I1208 00:40:30.396855  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.396862  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:30.396870  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:30.396880  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:30.458293  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:30.458313  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:30.489792  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:30.489807  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:30.546970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:30.546989  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:30.561949  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:30.561969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:30.648665  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:30.640064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.641064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.642741   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.643112   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.644583   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:33.148954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:33.159678  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:33.159739  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:33.188692  896760 cri.go:89] found id: ""
	I1208 00:40:33.188707  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.188725  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:33.188731  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:33.188815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:33.214527  896760 cri.go:89] found id: ""
	I1208 00:40:33.214542  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.214550  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:33.214555  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:33.214614  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:33.241307  896760 cri.go:89] found id: ""
	I1208 00:40:33.241323  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.241331  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:33.241336  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:33.241395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:33.267242  896760 cri.go:89] found id: ""
	I1208 00:40:33.267257  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.267265  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:33.267271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:33.267331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:33.293623  896760 cri.go:89] found id: ""
	I1208 00:40:33.293637  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.293645  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:33.293650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:33.293710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:33.319375  896760 cri.go:89] found id: ""
	I1208 00:40:33.319388  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.319395  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:33.319401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:33.319477  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:33.345164  896760 cri.go:89] found id: ""
	I1208 00:40:33.345178  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.345186  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:33.345193  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:33.345203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:33.402766  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:33.402783  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:33.417559  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:33.417576  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:33.484831  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:33.475790   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.476662   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.478492   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.479126   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.480879   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:33.484841  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:33.484851  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:33.553499  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:33.553527  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.087539  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:36.098484  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:36.098549  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:36.123061  896760 cri.go:89] found id: ""
	I1208 00:40:36.123075  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.123083  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:36.123089  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:36.123150  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:36.152786  896760 cri.go:89] found id: ""
	I1208 00:40:36.152800  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.152807  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:36.152813  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:36.152874  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:36.179122  896760 cri.go:89] found id: ""
	I1208 00:40:36.179137  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.179144  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:36.179150  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:36.179211  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:36.205226  896760 cri.go:89] found id: ""
	I1208 00:40:36.205239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.205247  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:36.205253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:36.205311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:36.231018  896760 cri.go:89] found id: ""
	I1208 00:40:36.231033  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.231040  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:36.231046  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:36.231104  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:36.257226  896760 cri.go:89] found id: ""
	I1208 00:40:36.257239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.257247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:36.257253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:36.257312  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:36.282378  896760 cri.go:89] found id: ""
	I1208 00:40:36.282395  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.282402  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:36.282411  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:36.282422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:36.297365  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:36.297381  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:36.361334  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:36.352968   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.353402   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355210   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355743   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.357267   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:36.361345  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:36.361356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:36.425983  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:36.426003  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.458376  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:36.458391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:39.019300  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:39.030277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:39.030337  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:39.059011  896760 cri.go:89] found id: ""
	I1208 00:40:39.059026  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.059033  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:39.059039  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:39.059099  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:39.084787  896760 cri.go:89] found id: ""
	I1208 00:40:39.084802  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.084809  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:39.084815  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:39.084879  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:39.111166  896760 cri.go:89] found id: ""
	I1208 00:40:39.111179  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.111186  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:39.111192  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:39.111252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:39.140388  896760 cri.go:89] found id: ""
	I1208 00:40:39.140403  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.140410  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:39.140415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:39.140475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:39.165040  896760 cri.go:89] found id: ""
	I1208 00:40:39.165054  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.165062  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:39.165067  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:39.165130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:39.191099  896760 cri.go:89] found id: ""
	I1208 00:40:39.191114  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.191122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:39.191127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:39.191187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:39.215889  896760 cri.go:89] found id: ""
	I1208 00:40:39.215903  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.215910  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:39.215918  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:39.215934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:39.279738  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:39.279760  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:39.295091  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:39.295108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:39.363341  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:39.354264   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.354968   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.356687   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.357285   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.358908   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:39.363363  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:39.363373  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:39.428022  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:39.428043  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
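
The block above is one full iteration of the wait loop that repeats for the remainder of this log: minikube pgreps for a kube-apiserver process, lists CRI containers for each expected control-plane component with crictl (all come back empty), then gathers kubelet, dmesg, describe-nodes, containerd, and container-status output before sleeping and trying again. A minimal Go sketch of that poll-and-collect pattern follows; the function name, the roughly 3-second interval (inferred from the timestamps), and the timeout value are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollAPIServer mirrors the loop visible in the log: look for a running
    // kube-apiserver process and, while it is absent, enumerate control-plane
    // containers the same way the harness does. Illustrative sketch only.
    func pollAPIServer(timeout time.Duration) error {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// The log's "sudo pgrep -xnf kube-apiserver.*minikube.*" step:
    		// pgrep exits non-zero when no process matches.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil // apiserver process found
    		}
    		// The log's per-component "sudo crictl ps -a --quiet --name=..." step.
    		for _, c := range components {
    			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
    			if len(out) == 0 {
    				fmt.Printf("no container found matching %q\n", c)
    			}
    		}
    		time.Sleep(3 * time.Second) // interval inferred from the timestamps above
    	}
    	return fmt.Errorf("kube-apiserver did not start within %s", timeout)
    }

    func main() {
    	if err := pollAPIServer(4 * time.Minute); err != nil { // timeout value is illustrative
    		fmt.Println(err)
    	}
    }

Each cycle in this log finds zero containers for every component, so a loop like this never exits early and the surrounding test eventually times out.
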
	I1208 00:40:41.960665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:41.971071  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:41.971142  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:41.997224  896760 cri.go:89] found id: ""
	I1208 00:40:41.997239  896760 logs.go:282] 0 containers: []
	W1208 00:40:41.997247  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:41.997253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:41.997315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:42.035665  896760 cri.go:89] found id: ""
	I1208 00:40:42.035680  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.035687  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:42.035692  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:42.035758  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:42.064088  896760 cri.go:89] found id: ""
	I1208 00:40:42.064103  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.064111  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:42.064117  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:42.064181  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:42.092740  896760 cri.go:89] found id: ""
	I1208 00:40:42.092757  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.092765  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:42.092771  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:42.092844  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:42.124291  896760 cri.go:89] found id: ""
	I1208 00:40:42.124309  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.124321  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:42.124329  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:42.124428  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:42.155416  896760 cri.go:89] found id: ""
	I1208 00:40:42.155431  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.155439  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:42.155445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:42.155515  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:42.188921  896760 cri.go:89] found id: ""
	I1208 00:40:42.188938  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.188945  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:42.188954  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:42.188965  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:42.249292  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:42.249321  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:42.266137  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:42.266155  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:42.342321  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:42.332243   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.333672   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.334366   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336262   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336858   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:42.332243   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.333672   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.334366   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336262   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336858   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:42.342333  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:42.342344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:42.406583  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:42.406602  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:44.937561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:44.948618  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:44.948679  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:44.982163  896760 cri.go:89] found id: ""
	I1208 00:40:44.982177  896760 logs.go:282] 0 containers: []
	W1208 00:40:44.982195  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:44.982202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:44.982276  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:45.033982  896760 cri.go:89] found id: ""
	I1208 00:40:45.033999  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.034008  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:45.034014  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:45.034085  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:45.089336  896760 cri.go:89] found id: ""
	I1208 00:40:45.089353  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.089362  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:45.089368  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:45.089437  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:45.132530  896760 cri.go:89] found id: ""
	I1208 00:40:45.132547  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.132555  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:45.132561  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:45.132672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:45.207404  896760 cri.go:89] found id: ""
	I1208 00:40:45.207423  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.207432  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:45.207438  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:45.207516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:45.247451  896760 cri.go:89] found id: ""
	I1208 00:40:45.247477  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.247486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:45.247493  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:45.247562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:45.291347  896760 cri.go:89] found id: ""
	I1208 00:40:45.291363  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.291373  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:45.291382  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:45.291393  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:45.358718  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:45.358739  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:45.375670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:45.375694  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:45.443052  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:45.434008   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.434889   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.436585   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.437154   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.438976   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:45.434008   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.434889   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.436585   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.437154   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.438976   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:45.443063  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:45.443075  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:45.507120  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:45.507142  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:48.037423  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:48.048528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:48.048599  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:48.078292  896760 cri.go:89] found id: ""
	I1208 00:40:48.078307  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.078314  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:48.078320  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:48.078380  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:48.103852  896760 cri.go:89] found id: ""
	I1208 00:40:48.103867  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.103874  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:48.103879  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:48.103938  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:48.129348  896760 cri.go:89] found id: ""
	I1208 00:40:48.129364  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.129371  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:48.129376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:48.129434  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:48.154375  896760 cri.go:89] found id: ""
	I1208 00:40:48.154390  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.154397  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:48.154402  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:48.154497  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:48.180043  896760 cri.go:89] found id: ""
	I1208 00:40:48.180058  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.180065  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:48.180070  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:48.180126  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:48.208497  896760 cri.go:89] found id: ""
	I1208 00:40:48.208511  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.208518  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:48.208524  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:48.208582  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:48.236937  896760 cri.go:89] found id: ""
	I1208 00:40:48.236960  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.236968  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:48.236975  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:48.236985  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:48.252020  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:48.252037  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:48.317246  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:48.308656   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.309213   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.310815   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.311307   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.313074   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:48.308656   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.309213   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.310815   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.311307   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.313074   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:48.317257  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:48.317267  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:48.381926  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:48.381947  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:48.410384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:48.410402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:50.965799  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:50.977456  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:50.977516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:51.008659  896760 cri.go:89] found id: ""
	I1208 00:40:51.008677  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.008685  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:51.008691  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:51.008763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:51.043130  896760 cri.go:89] found id: ""
	I1208 00:40:51.043144  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.043151  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:51.043157  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:51.043217  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:51.071991  896760 cri.go:89] found id: ""
	I1208 00:40:51.072014  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.072022  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:51.072028  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:51.072091  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:51.098639  896760 cri.go:89] found id: ""
	I1208 00:40:51.098654  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.098661  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:51.098667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:51.098727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:51.125133  896760 cri.go:89] found id: ""
	I1208 00:40:51.125147  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.125154  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:51.125159  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:51.125220  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:51.152232  896760 cri.go:89] found id: ""
	I1208 00:40:51.152247  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.152255  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:51.152271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:51.152333  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:51.181299  896760 cri.go:89] found id: ""
	I1208 00:40:51.181313  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.181321  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:51.181329  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:51.181339  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:51.243933  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:51.243955  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:51.272384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:51.272400  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:51.334024  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:51.334042  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:51.349155  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:51.349172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:51.419857  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:51.411268   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.412299   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.413252   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.414160   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.415792   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:51.411268   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.412299   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.413252   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.414160   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.415792   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:53.920516  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:53.931349  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:53.931410  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:53.960789  896760 cri.go:89] found id: ""
	I1208 00:40:53.960805  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.960816  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:53.960821  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:53.960887  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:53.991352  896760 cri.go:89] found id: ""
	I1208 00:40:53.991368  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.991376  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:53.991382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:53.991452  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:54.024088  896760 cri.go:89] found id: ""
	I1208 00:40:54.024103  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.024117  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:54.024123  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:54.024187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:54.051247  896760 cri.go:89] found id: ""
	I1208 00:40:54.051262  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.051269  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:54.051274  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:54.051335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:54.077953  896760 cri.go:89] found id: ""
	I1208 00:40:54.077968  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.077975  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:54.077985  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:54.078051  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:54.104672  896760 cri.go:89] found id: ""
	I1208 00:40:54.104686  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.104693  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:54.104699  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:54.104757  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:54.129936  896760 cri.go:89] found id: ""
	I1208 00:40:54.129950  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.129957  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:54.129965  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:54.129976  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:54.190590  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:54.190610  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:54.206141  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:54.206158  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:54.274636  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:54.265398   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.266260   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268064   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268746   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.270305   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:54.265398   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.266260   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268064   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268746   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.270305   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:54.274647  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:54.274658  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:54.343673  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:54.343693  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:56.875691  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:56.887842  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:56.887906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:56.916158  896760 cri.go:89] found id: ""
	I1208 00:40:56.916172  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.916179  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:56.916185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:56.916243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:56.940916  896760 cri.go:89] found id: ""
	I1208 00:40:56.940930  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.940937  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:56.940942  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:56.941002  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:56.964346  896760 cri.go:89] found id: ""
	I1208 00:40:56.964361  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.964368  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:56.964373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:56.964431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:56.989502  896760 cri.go:89] found id: ""
	I1208 00:40:56.989516  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.989523  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:56.989528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:56.989590  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:57.017437  896760 cri.go:89] found id: ""
	I1208 00:40:57.017452  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.017459  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:57.017465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:57.017527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:57.044860  896760 cri.go:89] found id: ""
	I1208 00:40:57.044873  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.044880  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:57.044886  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:57.044943  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:57.070028  896760 cri.go:89] found id: ""
	I1208 00:40:57.070043  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.070050  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:57.070058  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:57.070069  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:57.133938  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:57.133960  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:57.163813  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:57.163828  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:57.219970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:57.219990  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:57.234793  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:57.234810  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:57.297123  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:57.289483   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.289899   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291403   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291716   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.293180   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:57.289483   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.289899   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291403   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291716   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.293180   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
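
Every describe-nodes attempt in this log fails the same way: kubectl on the node cannot reach the apiserver at localhost:8441 ("dial tcp [::1]:8441: connect: connection refused"), meaning nothing is listening on that port yet. A standalone probe that reproduces the same refusal is sketched below; it is a hypothetical check, not part of the test harness, with the port taken from the error messages above.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// While kube-apiserver is down this fails with the same
    	// "connect: connection refused" recorded in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }

Running a probe like this on the control-plane node separates "port closed" from kubeconfig or TLS problems, which the kubectl errors above do not distinguish.
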
	I1208 00:40:59.797409  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:59.807447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:59.807521  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:59.831111  896760 cri.go:89] found id: ""
	I1208 00:40:59.831126  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.831139  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:59.831145  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:59.831204  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:59.862164  896760 cri.go:89] found id: ""
	I1208 00:40:59.862178  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.862185  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:59.862190  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:59.862245  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:59.913907  896760 cri.go:89] found id: ""
	I1208 00:40:59.913921  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.913928  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:59.913933  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:59.913990  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:59.938219  896760 cri.go:89] found id: ""
	I1208 00:40:59.938235  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.938242  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:59.938247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:59.938309  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:59.965447  896760 cri.go:89] found id: ""
	I1208 00:40:59.965460  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.965479  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:59.965485  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:59.965551  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:59.989806  896760 cri.go:89] found id: ""
	I1208 00:40:59.989820  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.989827  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:59.989833  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:59.989891  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:00.115094  896760 cri.go:89] found id: ""
	I1208 00:41:00.115110  896760 logs.go:282] 0 containers: []
	W1208 00:41:00.115118  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:00.115126  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:00.115138  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:00.211003  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:00.211027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:00.261522  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:00.261543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:00.334293  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:00.334316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:00.381440  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:00.381465  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:00.482780  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:00.472456   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.473594   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.474550   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476576   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476966   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:00.472456   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.473594   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.474550   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476576   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476966   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:02.983027  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:02.993616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:02.993677  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:03.021098  896760 cri.go:89] found id: ""
	I1208 00:41:03.021114  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.021122  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:03.021128  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:03.021189  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:03.047499  896760 cri.go:89] found id: ""
	I1208 00:41:03.047521  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.047528  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:03.047534  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:03.047594  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:03.072719  896760 cri.go:89] found id: ""
	I1208 00:41:03.072749  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.072757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:03.072762  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:03.072841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:03.098912  896760 cri.go:89] found id: ""
	I1208 00:41:03.098927  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.098934  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:03.098939  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:03.099001  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:03.125225  896760 cri.go:89] found id: ""
	I1208 00:41:03.125239  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.125247  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:03.125252  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:03.125311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:03.151371  896760 cri.go:89] found id: ""
	I1208 00:41:03.151384  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.151392  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:03.151397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:03.151457  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:03.176410  896760 cri.go:89] found id: ""
	I1208 00:41:03.176424  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.176432  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:03.176439  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:03.176450  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:03.231731  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:03.231750  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:03.246857  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:03.246874  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:03.313632  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:03.304930   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.305752   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307366   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307927   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.309517   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:03.313651  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:03.313662  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:03.381170  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:03.381190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
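The cycle above is minikube's periodic health sweep: poll for a kube-apiserver process, list each expected control-plane container through crictl, and, when nothing is found, tail the kubelet and containerd journals. A minimal sketch of running the same sweep by hand against this profile's node container (the container name functional-386544 comes from the test profile; the commands themselves are lifted from the Run: lines above):

    # poll for the apiserver process the way minikube does
    docker exec functional-386544 pgrep -xnf 'kube-apiserver.*minikube.*'
    # list each control-plane container crictl-side; empty output = not created
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      docker exec functional-386544 crictl ps -a --quiet --name="$name"
    done
    # the journals minikube gathers when all of the above come back empty
    docker exec functional-386544 journalctl -u kubelet -n 400
    docker exec functional-386544 journalctl -u containerd -n 400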
	I1208 00:41:05.911707  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:05.922187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:05.922249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:05.946678  896760 cri.go:89] found id: ""
	I1208 00:41:05.946692  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.946698  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:05.946704  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:05.946760  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:05.971332  896760 cri.go:89] found id: ""
	I1208 00:41:05.971344  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.971351  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:05.971357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:05.971418  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:05.996173  896760 cri.go:89] found id: ""
	I1208 00:41:05.996187  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.996194  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:05.996200  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:05.996257  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:06.029475  896760 cri.go:89] found id: ""
	I1208 00:41:06.029489  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.029497  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:06.029502  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:06.029578  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:06.058996  896760 cri.go:89] found id: ""
	I1208 00:41:06.059009  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.059017  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:06.059022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:06.059079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:06.083207  896760 cri.go:89] found id: ""
	I1208 00:41:06.083220  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.083227  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:06.083233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:06.083301  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:06.108817  896760 cri.go:89] found id: ""
	I1208 00:41:06.108831  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.108848  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:06.108856  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:06.108867  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:06.124009  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:06.124027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:06.189487  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:06.180763   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.181346   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183067   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183548   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.185619   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:06.189497  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:06.189509  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:06.253352  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:06.253372  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:06.285932  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:06.285948  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:08.842570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:08.854529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:08.854589  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:08.884339  896760 cri.go:89] found id: ""
	I1208 00:41:08.884355  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.884362  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:08.884367  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:08.884427  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:08.914891  896760 cri.go:89] found id: ""
	I1208 00:41:08.914905  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.914924  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:08.914929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:08.914998  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:08.941436  896760 cri.go:89] found id: ""
	I1208 00:41:08.941452  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.941459  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:08.941465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:08.941535  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:08.966802  896760 cri.go:89] found id: ""
	I1208 00:41:08.966816  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.966823  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:08.966829  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:08.966890  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:09.002946  896760 cri.go:89] found id: ""
	I1208 00:41:09.002962  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.002971  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:09.002977  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:09.003049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:09.031184  896760 cri.go:89] found id: ""
	I1208 00:41:09.031199  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.031207  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:09.031213  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:09.031288  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:09.055946  896760 cri.go:89] found id: ""
	I1208 00:41:09.055971  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.055979  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:09.055987  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:09.055997  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:09.121830  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:09.121850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:09.150682  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:09.150700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:09.214609  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:09.214636  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:09.230018  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:09.230035  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:09.298095  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:09.289090   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.289949   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.291555   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.292097   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.293719   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:41:11.798922  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:11.809212  896760 kubeadm.go:602] duration metric: took 4m1.466236852s to restartPrimaryControlPlane
	W1208 00:41:11.809278  896760 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:41:11.810440  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:41:12.224260  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:41:12.238539  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
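At this point minikube has spent 4m01s trying to restart the existing control plane, gives up, and falls back to wiping the node and re-initializing it from the freshly staged kubeadm.yaml. The fallback, reduced to the two commands from the log (run inside the node; binary version and paths exactly as shown above):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml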
	I1208 00:41:12.247058  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:41:12.247114  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:41:12.255525  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:41:12.255534  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:41:12.255586  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:41:12.263892  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:41:12.263953  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:41:12.271955  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:41:12.280091  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:41:12.280149  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:41:12.288143  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.296120  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:41:12.296196  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.303946  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:41:12.312368  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:41:12.312423  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
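The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each of the four kubeconfigs under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and is otherwise removed so that kubeadm init regenerates it. Here kubeadm reset has already deleted all four, so every grep exits with status 2 and each rm is a no-op. The same check, compacted into a loop (endpoint taken from the log):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done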
	I1208 00:41:12.320463  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:41:12.364373  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:41:12.364695  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:41:12.438406  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:41:12.438492  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:41:12.438531  896760 kubeadm.go:319] OS: Linux
	I1208 00:41:12.438577  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:41:12.438625  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:41:12.438672  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:41:12.438719  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:41:12.438766  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:41:12.438813  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:41:12.438857  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:41:12.438904  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:41:12.438949  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:41:12.514836  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:41:12.514942  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:41:12.515034  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:41:12.521560  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:41:12.527008  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:41:12.527099  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:41:12.527164  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:41:12.527241  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:41:12.527300  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:41:12.527369  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:41:12.527423  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:41:12.527485  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:41:12.527544  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:41:12.527617  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:41:12.527688  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:41:12.527724  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:41:12.527778  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:41:13.245010  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:41:13.299392  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:41:13.614595  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:41:13.963710  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:41:14.175279  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:41:14.176043  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:41:14.180186  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:41:14.183629  896760 out.go:252]   - Booting up control plane ...
	I1208 00:41:14.183729  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:41:14.183806  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:41:14.184436  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:41:14.204887  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:41:14.204990  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:41:14.213421  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:41:14.213704  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:41:14.213908  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:41:14.352082  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:41:14.352289  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:45:14.352397  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00008019s
	I1208 00:45:14.352432  896760 kubeadm.go:319] 
	I1208 00:45:14.352488  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:45:14.352520  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:45:14.352633  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:45:14.352639  896760 kubeadm.go:319] 
	I1208 00:45:14.352742  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:45:14.352774  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:45:14.352803  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:45:14.352807  896760 kubeadm.go:319] 
	I1208 00:45:14.356965  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:45:14.357429  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:45:14.357540  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:45:14.357802  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 00:45:14.357807  896760 kubeadm.go:319] 
	I1208 00:45:14.357875  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:45:14.357995  896760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00008019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
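The four-minute wait that just failed is kubeadm's kubelet health check: it polls the kubelet's healthz endpoint on 127.0.0.1:10248, and "connection refused" means the kubelet process never stayed up long enough to serve it. A hedged way to run the same probe by hand, plus the two follow-ups kubeadm itself suggests, from the host (again assuming the functional-386544 node container):

    docker exec functional-386544 curl -sS http://127.0.0.1:10248/healthz
    docker exec functional-386544 systemctl status kubelet --no-pager
    docker exec functional-386544 journalctl -xeu kubelet --no-pager | tail -n 100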
	
	I1208 00:45:14.358087  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:45:14.770086  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:45:14.783732  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:45:14.783788  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:45:14.791646  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:45:14.791657  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:45:14.791710  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:45:14.799512  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:45:14.799569  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:45:14.807303  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:45:14.815223  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:45:14.815280  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:45:14.822916  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.831219  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:45:14.831274  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.838751  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:45:14.846479  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:45:14.846535  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:45:14.855105  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:45:14.892727  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:45:14.893019  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:45:14.958827  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:45:14.958888  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:45:14.958921  896760 kubeadm.go:319] OS: Linux
	I1208 00:45:14.958963  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:45:14.959008  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:45:14.959052  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:45:14.959097  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:45:14.959143  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:45:14.959192  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:45:14.959234  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:45:14.959279  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:45:14.959321  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:45:15.063986  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:45:15.064091  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:45:15.064182  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:45:15.072119  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:45:15.073836  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:45:15.073929  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:45:15.073997  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:45:15.074078  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:45:15.074847  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:45:15.074919  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:45:15.074970  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:45:15.075029  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:45:15.075086  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:45:15.075260  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:45:15.075466  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:45:15.075788  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:45:15.075847  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:45:15.207541  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:45:15.419182  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:45:15.708081  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:45:15.925468  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:45:16.152957  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:45:16.153669  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:45:16.156472  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:45:16.157817  896760 out.go:252]   - Booting up control plane ...
	I1208 00:45:16.157909  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:45:16.157987  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:45:16.159025  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:45:16.179954  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:45:16.180052  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:45:16.189229  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:45:16.190665  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:45:16.190709  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:45:16.336970  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:45:16.337083  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:49:16.337272  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305556s
	I1208 00:49:16.337296  896760 kubeadm.go:319] 
	I1208 00:49:16.337409  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:49:16.337518  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:49:16.337839  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:49:16.337849  896760 kubeadm.go:319] 
	I1208 00:49:16.338164  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:49:16.338221  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:49:16.338281  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:49:16.338285  896760 kubeadm.go:319] 
	I1208 00:49:16.344611  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:49:16.345152  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:49:16.345268  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:49:16.345632  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:49:16.345641  896760 kubeadm.go:319] 
	I1208 00:49:16.345722  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:49:16.345780  896760 kubeadm.go:403] duration metric: took 12m6.045651138s to StartCluster
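Both init attempts die the same way, and of the three preflight warnings the cgroups v1 deprecation is the one that names a concrete knob: on this cgroup-v1 kernel (5.15.0-1084-aws), kubelet v1.35 wants failCgroupV1 explicitly set to false. Since the [patches] lines above show kubeadm already applying a strategic-merge patch to the kubeletconfiguration target, one plausible fix, sketched here as an assumption rather than anything this log confirms, would be to drop that setting into a kubeadm patches directory (the path below is hypothetical and would have to match patches.directory in the kubeadm config):

    # hypothetical patches dir; kubeadm picks up files named
    # <target>[+patchtype].yaml, here targeting kubeletconfiguration
    mkdir -p /var/tmp/minikube/patches
    printf '%s\n' \
      'apiVersion: kubelet.config.k8s.io/v1beta1' \
      'kind: KubeletConfiguration' \
      'failCgroupV1: false' \
      > /var/tmp/minikube/patches/kubeletconfiguration+strategic.yaml

Whether that alone would let the kubelet pass its healthz check here is not something this run demonstrates; the journalctl -xeu kubelet output would be needed to confirm the actual crash cause.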
	I1208 00:49:16.345820  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:49:16.345897  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:49:16.378062  896760 cri.go:89] found id: ""
	I1208 00:49:16.378080  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.378088  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:49:16.378094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:49:16.378167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:49:16.414010  896760 cri.go:89] found id: ""
	I1208 00:49:16.414024  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.414043  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:49:16.414048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:49:16.414116  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:49:16.441708  896760 cri.go:89] found id: ""
	I1208 00:49:16.441732  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.441739  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:49:16.441745  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:49:16.441816  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:49:16.469812  896760 cri.go:89] found id: ""
	I1208 00:49:16.469826  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.469833  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:49:16.469848  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:49:16.469906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:49:16.495155  896760 cri.go:89] found id: ""
	I1208 00:49:16.495170  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.495177  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:49:16.495183  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:49:16.495242  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:49:16.522141  896760 cri.go:89] found id: ""
	I1208 00:49:16.522155  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.522163  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:49:16.522168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:49:16.522227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:49:16.551643  896760 cri.go:89] found id: ""
	I1208 00:49:16.551656  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.551663  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:49:16.551671  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:49:16.551681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:49:16.614342  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:49:16.614362  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:49:16.644124  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:49:16.644140  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:49:16.703646  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:49:16.703665  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:49:16.718513  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:49:16.718530  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:49:16.782371  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1208 00:49:16.782383  896760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:49:16.782409  896760 out.go:285] * 
	W1208 00:49:16.782515  896760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.782535  896760 out.go:285] * 
	W1208 00:49:16.784660  896760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:49:16.789524  896760 out.go:203] 
	W1208 00:49:16.792367  896760 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.792413  896760 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:49:16.792436  896760 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:49:16.795713  896760 out.go:203] 
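The kubelet journal further down shows the proximate cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase never sees a healthy kubelet and times out after 4m0s. The [WARNING SystemVerification] above spells out the opt-out. A minimal sketch of it in shell, assuming it is delivered through the same kubeadm "kubeletconfiguration" patch mechanism visible in the [patches] phase (the file name and placement here are hypothetical):

	# Which cgroup version is this node actually on? cgroup2fs = v2, tmpfs = v1.
	stat -fc %T /sys/fs/cgroup

	# The KubeletConfiguration override the warning asks for (hypothetical patch file):
	cat > kubeletconfiguration+strategic.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Minikube's own suggestion below (--extra-config=kubelet.cgroup-driver=systemd) targets the cgroup driver instead, so it likely would not by itself satisfy the v1/v2 check.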
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919395045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919406500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919452219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919473840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919487707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919499637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919508720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919528249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919545578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919576815Z" level=info msg="Connect containerd service"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919974424Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.920657812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935258404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935352461Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935383608Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935425134Z" level=info msg="Start recovering state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981163284Z" level=info msg="Start event monitor"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981372805Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981441769Z" level=info msg="Start streaming server"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981512023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981572914Z" level=info msg="runtime interface starting up..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981643085Z" level=info msg="starting plugins..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981710277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981908794Z" level=info msg="containerd successfully booted in 0.086733s"
	Dec 08 00:37:08 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
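The containerd section above ends cleanly ("containerd successfully booted in 0.086733s"), and the earlier "failed to load cni during init" error is normal before kubeadm/kindnet write a config into /etc/cni/net.d, so the container runtime is not what is blocking this cluster. A quick manual confirmation, sketched on the assumption that crictl is present in the kic node image:

	# Ask the CRI endpoint inside the node container for its runtime status.
	docker exec functional-386544 crictl info | head -n 20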
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:20.258592   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:20.259394   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:20.261046   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:20.261688   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:20.263397   21197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
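This refusal is a downstream symptom, not a second fault: with the kubelet crash-looping, the kube-apiserver static pod is never created (the container status table above is empty), so nothing listens on port 8441. A direct probe, sketched against the host-side mapping that the docker inspect output later in this report shows for 8441/tcp (33561 in this run; adjust if the mapping differs):

	# Probe the apiserver through the docker port mapping; expect connection refused here.
	curl -sk https://127.0.0.1:33561/version || echo "apiserver not listening"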
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:49:20 up  5:31,  0 user,  load average: 0.25, 0.20, 0.59
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 00:49:17 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:17 functional-386544 kubelet[21029]: E1208 00:49:17.910015   21029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:17 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:18 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 08 00:49:18 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:18 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:18 functional-386544 kubelet[21073]: E1208 00:49:18.676366   21073 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:18 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:18 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:19 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 08 00:49:19 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:19 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:19 functional-386544 kubelet[21107]: E1208 00:49:19.353951   21107 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:19 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:19 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:49:20 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 08 00:49:20 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:20 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:49:20 functional-386544 kubelet[21176]: E1208 00:49:20.162750   21176 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:49:20 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:49:20 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
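The restart counters above (322 through 325 in roughly three seconds) show systemd relaunching a kubelet that fails configuration validation immediately on startup, which is why every test in this group that needs the apiserver fails fast from here on. To read the failing attempts straight from the node, a sketch (assuming the kic container is reachable under the profile name):

	# Tail the kubelet unit journal inside the node container.
	docker exec functional-386544 journalctl -u kubelet -n 20 --no-pager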
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (397.057202ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-386544 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-386544 apply -f testdata/invalidsvc.yaml: exit status 1 (56.022724ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-386544 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.81s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386544 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386544 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386544 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-386544 --alsologtostderr -v=1] stderr:
I1208 00:51:29.514783  914036 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:29.514952  914036 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:29.514965  914036 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:29.514970  914036 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:29.515231  914036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:29.515521  914036 mustload.go:66] Loading cluster: functional-386544
I1208 00:51:29.515941  914036 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:29.516425  914036 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:29.534405  914036 host.go:66] Checking if "functional-386544" exists ...
I1208 00:51:29.534772  914036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:51:29.592084  914036 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.582703844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:51:29.592214  914036 api_server.go:166] Checking apiserver status ...
I1208 00:51:29.592286  914036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1208 00:51:29.592337  914036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:29.609342  914036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
W1208 00:51:29.716390  914036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1208 00:51:29.719512  914036 out.go:179] * The control-plane node functional-386544 apiserver is not running: (state=Stopped)
I1208 00:51:29.722501  914036 out.go:179]   To start a cluster, run: "minikube start -p functional-386544"
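The dashboard command never reaches the point of launching a proxy: as the trace above shows, minikube's preflight (mustload.go -> api_server.go) decides whether the apiserver is up by running pgrep over SSH inside the node, and an exit status of 1 maps to state=Stopped and an early abort. The same check can be reproduced by hand; a sketch using docker exec in place of minikube's SSH tunnel:

	# Replicate minikube's apiserver liveness check inside the node container.
	docker exec functional-386544 sudo pgrep -xnf 'kube-apiserver.*minikube.*'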
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
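Most of this inspect output is routine, but the Ports map is the part worth keeping for debugging: it records where each guest port (22 for SSH, 8441 for the apiserver) is published on the host. A single mapping can be pulled out with the same Go-template pattern minikube itself uses in the cli_runner line above:

	# Recover the host port backing the apiserver (8441/tcp -> 33561 in this run).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-386544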
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (374.945718ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-386544 service hello-node --url --format={{.IP}}                                                                                         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ service   │ functional-386544 service hello-node --url                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh -- ls -la /mount-9p                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh cat /mount-9p/test-1765155080131625029                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh sudo umount -f /mount-9p                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo222343014/001:/mount-9p --alsologtostderr -v=1 --port 46464  │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh -- ls -la /mount-9p                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh sudo umount -f /mount-9p                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount1 --alsologtostderr -v=1                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount2 --alsologtostderr -v=1                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount3 --alsologtostderr -v=1                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount1                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh findmnt -T /mount2                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh findmnt -T /mount3                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ mount     │ -p functional-386544 --kill=true                                                                                                                    │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-386544 --alsologtostderr -v=1                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:51:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:51:29.270661  913965 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:51:29.270858  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.270877  913965 out.go:374] Setting ErrFile to fd 2...
	I1208 00:51:29.270883  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.271187  913965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:51:29.271612  913965 out.go:368] Setting JSON to false
	I1208 00:51:29.272534  913965 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":20042,"bootTime":1765135047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:51:29.272617  913965 start.go:143] virtualization:  
	I1208 00:51:29.275838  913965 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:51:29.279436  913965 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:51:29.279577  913965 notify.go:221] Checking for updates...
	I1208 00:51:29.285227  913965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:51:29.288262  913965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:51:29.291134  913965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:51:29.293999  913965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:51:29.296847  913965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:51:29.300084  913965 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:51:29.300777  913965 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:51:29.325912  913965 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:51:29.326034  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.396704  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.386909367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.396816  913965 docker.go:319] overlay module found
	I1208 00:51:29.399947  913965 out.go:179] * Using the docker driver based on existing profile
	I1208 00:51:29.402712  913965 start.go:309] selected driver: docker
	I1208 00:51:29.402732  913965 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.402834  913965 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:51:29.402950  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.458342  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.448170088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.458809  913965 cni.go:84] Creating CNI manager for ""
	I1208 00:51:29.458888  913965 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:51:29.458933  913965 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.461944  913965 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919395045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919406500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919452219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919473840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919487707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919499637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919508720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919528249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919545578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919576815Z" level=info msg="Connect containerd service"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919974424Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.920657812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935258404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935352461Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935383608Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935425134Z" level=info msg="Start recovering state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981163284Z" level=info msg="Start event monitor"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981372805Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981441769Z" level=info msg="Start streaming server"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981512023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981572914Z" level=info msg="runtime interface starting up..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981643085Z" level=info msg="starting plugins..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981710277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981908794Z" level=info msg="containerd successfully booted in 0.086733s"
	Dec 08 00:37:08 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:51:30.853539   23315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:30.854397   23315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:30.856257   23315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:30.856964   23315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:30.857973   23315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:51:30 up  5:34,  0 user,  load average: 0.48, 0.28, 0.56
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:51:27 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:28 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 08 00:51:28 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:28 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:28 functional-386544 kubelet[23174]: E1208 00:51:28.413814   23174 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:28 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:28 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 08 00:51:29 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:29 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:29 functional-386544 kubelet[23194]: E1208 00:51:29.169218   23194 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 08 00:51:29 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:29 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:29 functional-386544 kubelet[23208]: E1208 00:51:29.918059   23208 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:29 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:30 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 08 00:51:30 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:30 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:30 functional-386544 kubelet[23257]: E1208 00:51:30.656327   23257 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:30 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:30 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
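The kubelet section above shows the root cause for this run: the v1.35.0-beta.0 kubelet is configured to refuse cgroup v1 hosts ("kubelet is configured to not run on a host using cgroup v1"), so systemd cycles through restarts 496-499 and the apiserver behind localhost:8441 never comes up, which is what the "connection refused" errors in the describe-nodes section reflect. A quick way to confirm the cgroup version on a host (the standard check from the Kubernetes docs, not part of the test suite):

	# "cgroup2fs" means the unified cgroup v2 hierarchy; "tmpfs" means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# A systemd host still on v1 can be moved to the unified hierarchy with the
	# kernel parameter below (reboot required; GRUB-style cmdline assumed):
	#   systemd.unified_cgroup_hierarchy=1
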
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (325.469838ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.81s)
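With the apiserver down, DashboardCmd fails its precondition check almost immediately (1.81s) rather than timing out. To probe the apiserver from the host directly, the published 8441/tcp mapping can be hit over HTTPS; the port number below comes from the docker inspect output further down and is specific to this run:

	# -k because the apiserver serves minikube's self-signed certificate;
	# /livez is readable without credentials under the default RBAC bindings
	curl -k https://127.0.0.1:33561/livez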

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 status: exit status 2 (307.601137ms)

-- stdout --
	functional-386544
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-386544 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (311.023998ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-386544 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 status -o json: exit status 2 (320.545269ms)

-- stdout --
	{"Name":"functional-386544","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-386544 status -o json" : exit status 2
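All three status invocations exit with status 2 by design: minikube status encodes component state in its exit code, so a command that ran cleanly against a cluster with a stopped kubelet and apiserver still returns non-zero (the harness acknowledges this below with "status error: exit status 2 (may be ok)"). The JSON form is the easiest to script against; a minimal sketch, assuming jq is available on the host:

	# Pull the same component states the test saw out of the JSON output
	out/minikube-linux-arm64 -p functional-386544 status -o json \
	  | jq -r '"kubelet=" + .Kubelet + " apiserver=" + .APIServer'
	# -> kubelet=Stopped apiserver=Stopped
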
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
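The inspect output confirms the docker driver layer is healthy: the container is Running, and all five guest ports (22, 2376, 5000, 8441, 32443) are published on 127.0.0.1, so the failure sits inside the guest rather than in Docker networking. Any of the mappings can be resolved with the same Go-template lookup minikube itself uses for 22/tcp in the Last Start log below, for example:

	docker container inspect functional-386544 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
	# -> 33561 in this run
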
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (343.958108ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-386544 addons list -o json                                                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ service │ functional-386544 service list                                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ service │ functional-386544 service list -o json                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ service │ functional-386544 service --namespace=default --https --url hello-node                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ service │ functional-386544 service hello-node --url --format={{.IP}}                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ service │ functional-386544 service hello-node --url                                                                                                         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount   │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh -- ls -la /mount-9p                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh cat /mount-9p/test-1765155080131625029                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh sudo umount -f /mount-9p                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ mount   │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo222343014/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh -- ls -la /mount-9p                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh sudo umount -f /mount-9p                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount   │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount1 --alsologtostderr -v=1               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount   │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount2 --alsologtostderr -v=1               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount   │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount3 --alsologtostderr -v=1               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh     │ functional-386544 ssh findmnt -T /mount1                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh findmnt -T /mount2                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh     │ functional-386544 ssh findmnt -T /mount3                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ mount   │ -p functional-386544 --kill=true                                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:37:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:37:06.019721  896760 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:37:06.019851  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.019855  896760 out.go:374] Setting ErrFile to fd 2...
	I1208 00:37:06.019858  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.020163  896760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:37:06.020664  896760 out.go:368] Setting JSON to false
	I1208 00:37:06.021613  896760 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":19179,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:37:06.021695  896760 start.go:143] virtualization:  
	I1208 00:37:06.025173  896760 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:37:06.029087  896760 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:37:06.029181  896760 notify.go:221] Checking for updates...
	I1208 00:37:06.035043  896760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:37:06.037984  896760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:37:06.041170  896760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:37:06.044080  896760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:37:06.047053  896760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:37:06.050554  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:06.050663  896760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:37:06.082313  896760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:37:06.082426  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.147928  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.138471154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.148036  896760 docker.go:319] overlay module found
	I1208 00:37:06.151011  896760 out.go:179] * Using the docker driver based on existing profile
	I1208 00:37:06.153817  896760 start.go:309] selected driver: docker
	I1208 00:37:06.153826  896760 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.153925  896760 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:37:06.154035  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.211588  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.202265066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.212013  896760 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:37:06.212038  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:06.212099  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:06.212152  896760 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.217244  896760 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:37:06.220210  896760 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:37:06.223461  896760 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:37:06.226522  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:06.226581  896760 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:37:06.226589  896760 cache.go:65] Caching tarball of preloaded images
	I1208 00:37:06.226692  896760 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:37:06.226679  896760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:37:06.226706  896760 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:37:06.226817  896760 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:37:06.250884  896760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:37:06.250894  896760 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:37:06.250908  896760 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:37:06.250945  896760 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:37:06.250999  896760 start.go:364] duration metric: took 38.401µs to acquireMachinesLock for "functional-386544"
	I1208 00:37:06.251017  896760 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:37:06.251022  896760 fix.go:54] fixHost starting: 
	I1208 00:37:06.251283  896760 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:37:06.268900  896760 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:37:06.268920  896760 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:37:06.272102  896760 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:37:06.272127  896760 machine.go:94] provisionDockerMachine start ...
	I1208 00:37:06.272215  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.289500  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.289831  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.289837  896760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:37:06.446749  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.446764  896760 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:37:06.446826  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.466658  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.466960  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.466968  896760 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:37:06.637199  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.637280  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.656923  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.657245  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.657259  896760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 00:37:06.810893  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:37:06.810908  896760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:37:06.810925  896760 ubuntu.go:190] setting up certificates
	I1208 00:37:06.810935  896760 provision.go:84] configureAuth start
	I1208 00:37:06.811016  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:06.829686  896760 provision.go:143] copyHostCerts
	I1208 00:37:06.829765  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:37:06.829784  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:37:06.829861  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:37:06.829960  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:37:06.829964  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:37:06.829992  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:37:06.830039  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:37:06.830042  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:37:06.830063  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:37:06.830106  896760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:37:07.178648  896760 provision.go:177] copyRemoteCerts
	I1208 00:37:07.178704  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:37:07.178748  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.196483  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.308383  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:37:07.329033  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:37:07.348621  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:37:07.367658  896760 provision.go:87] duration metric: took 556.701814ms to configureAuth
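Note: copyHostCerts and the "generating server cert" line above show minikube reusing the host CA material to mint a Docker-machine server certificate whose SANs cover 127.0.0.1, 192.168.49.2, functional-386544, localhost, and minikube. Below is a self-contained Go sketch of issuing such a SAN-bearing server certificate; for brevity the CA is generated inline (minikube instead loads ca.pem/ca-key.pem from disk), and error values are discarded as this is only a sketch:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; a real provisioner would parse the existing ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the "generating server cert" log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-386544"}},
		DNSNames:     []string{"functional-386544", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}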
	I1208 00:37:07.367675  896760 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:37:07.367867  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:07.367872  896760 machine.go:97] duration metric: took 1.095740792s to provisionDockerMachine
	I1208 00:37:07.367878  896760 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:37:07.367889  896760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:37:07.367938  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:37:07.367977  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.392993  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.498867  896760 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:37:07.502617  896760 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:37:07.502635  896760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:37:07.502647  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:37:07.502710  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:37:07.502786  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:37:07.502867  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:37:07.502912  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:37:07.511139  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:07.530267  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:37:07.549480  896760 start.go:296] duration metric: took 181.586948ms for postStartSetup
	I1208 00:37:07.549558  896760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:37:07.549616  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.567759  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.671740  896760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:37:07.676721  896760 fix.go:56] duration metric: took 1.425689657s for fixHost
	I1208 00:37:07.676741  896760 start.go:83] releasing machines lock for "functional-386544", held for 1.425734498s
	I1208 00:37:07.676811  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:07.694624  896760 ssh_runner.go:195] Run: cat /version.json
	I1208 00:37:07.694669  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.694717  896760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:37:07.694775  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.720790  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.720932  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.911560  896760 ssh_runner.go:195] Run: systemctl --version
	I1208 00:37:07.918241  896760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:37:07.922676  896760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:37:07.922750  896760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:37:07.930831  896760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:37:07.930844  896760 start.go:496] detecting cgroup driver to use...
	I1208 00:37:07.930875  896760 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:37:07.930921  896760 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:37:07.947115  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:37:07.961050  896760 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:37:07.961113  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:37:07.977365  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:37:07.991192  896760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:37:08.126175  896760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:37:08.269608  896760 docker.go:234] disabling docker service ...
	I1208 00:37:08.269664  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:37:08.284945  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:37:08.299108  896760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:37:08.432565  896760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:37:08.555248  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:37:08.569474  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:37:08.585412  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:37:08.595004  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:37:08.604840  896760 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:37:08.604902  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:37:08.613812  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.623203  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:37:08.633142  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.643038  896760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:37:08.652239  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:37:08.661623  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:37:08.671250  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 00:37:08.680657  896760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:37:08.688616  896760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:37:08.696764  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:08.823042  896760 ssh_runner.go:195] Run: sudo systemctl restart containerd
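Note: the sed pipeline above rewrites /etc/containerd/config.toml in place (cgroupfs as the cgroup driver, the pause:3.10.1 sandbox image, the CNI conf dir, unprivileged ports) before the daemon-reload and containerd restart. A purely illustrative Go version of two of those substitutions, not minikube's implementation:

package main

import (
	"os"
	"regexp"
)

func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	s = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(s, "${1}SystemdCgroup = false")
	// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
	s = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	if err := os.WriteFile("/etc/containerd/config.toml", []byte(s), 0o644); err != nil {
		panic(err)
	}
}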
	I1208 00:37:08.984184  896760 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:37:08.984277  896760 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:37:08.989088  896760 start.go:564] Will wait 60s for crictl version
	I1208 00:37:08.989158  896760 ssh_runner.go:195] Run: which crictl
	I1208 00:37:08.993493  896760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:37:09.024246  896760 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:37:09.024323  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.048155  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.074342  896760 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:37:09.077377  896760 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:37:09.094080  896760 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:37:09.101988  896760 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:37:09.104771  896760 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:37:09.104921  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:09.104997  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.131121  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.131133  896760 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:37:09.131193  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.156235  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.156250  896760 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:37:09.156277  896760 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:37:09.156381  896760 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 00:37:09.156452  896760 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:37:09.182781  896760 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:37:09.182799  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:09.182812  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:09.182826  896760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:37:09.182847  896760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:37:09.182951  896760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:37:09.183025  896760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:37:09.190958  896760 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:37:09.191018  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:37:09.198701  896760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:37:09.211735  896760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:37:09.225024  896760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1208 00:37:09.237969  896760 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:37:09.241818  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:09.362221  896760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:37:09.592794  896760 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:37:09.592805  896760 certs.go:195] generating shared ca certs ...
	I1208 00:37:09.592820  896760 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:37:09.592963  896760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:37:09.593013  896760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:37:09.593019  896760 certs.go:257] generating profile certs ...
	I1208 00:37:09.593102  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:37:09.593154  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:37:09.593193  896760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:37:09.593299  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:37:09.593334  896760 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:37:09.593340  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:37:09.593370  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:37:09.593392  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:37:09.593414  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:37:09.593455  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:09.594053  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:37:09.614864  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:37:09.633613  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:37:09.652858  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:37:09.672208  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:37:09.691703  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:37:09.711394  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:37:09.730947  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:37:09.750211  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:37:09.769149  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:37:09.787710  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:37:09.806312  896760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:37:09.821128  896760 ssh_runner.go:195] Run: openssl version
	I1208 00:37:09.827672  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.835407  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:37:09.843631  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847882  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847954  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.890017  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:37:09.897920  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.905917  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:37:09.913958  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918017  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918088  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.960169  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:37:09.968154  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.975996  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:37:09.984080  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988210  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988283  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:37:10.030981  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 00:37:10.040434  896760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:37:10.045482  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:37:10.089037  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:37:10.131753  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:37:10.174120  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:37:10.216988  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:37:10.258490  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
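Note: each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether that control-plane certificate remains valid for at least another 86400 seconds (24h); a non-zero exit would trigger regeneration. A hedged Go equivalent for a single certificate (the path is copied from the log; the program shape is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors openssl's -checkend semantics: expiring within the window fails.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}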
	I1208 00:37:10.300139  896760 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:10.300218  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:37:10.300290  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.333072  896760 cri.go:89] found id: ""
	I1208 00:37:10.333133  896760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:37:10.342949  896760 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:37:10.342966  896760 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:37:10.343020  896760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:37:10.351917  896760 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.352512  896760 kubeconfig.go:125] found "functional-386544" server: "https://192.168.49.2:8441"
	I1208 00:37:10.356488  896760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:37:10.371422  896760 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:22:35.509962182 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:37:09.232874988 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1208 00:37:10.371434  896760 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:37:10.371448  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 00:37:10.371510  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.399030  896760 cri.go:89] found id: ""
	I1208 00:37:10.399096  896760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:37:10.416716  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:37:10.425417  896760 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  8 00:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  8 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  8 00:26 /etc/kubernetes/scheduler.conf
	
	I1208 00:37:10.425491  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:37:10.433870  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:37:10.441918  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.441981  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:37:10.450104  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.458339  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.458406  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.466222  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:37:10.474083  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.474143  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:37:10.482138  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:37:10.490230  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:10.544026  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.386589  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.605461  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.662330  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.710396  896760 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:37:11.710500  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.210751  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.710625  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.211368  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:13.710629  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.210663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:14.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:15.710895  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.211137  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:16.711373  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.211351  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:17.710569  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.210608  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:18.710907  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.211191  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:19.710689  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.210845  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:20.710623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.211163  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:21.711542  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.210600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:22.710988  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.210661  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:23.710658  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.210891  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:24.711295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.210648  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:25.710685  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.211112  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:26.711299  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.210714  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:27.710657  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:28.710683  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.210651  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:29.711193  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.210592  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:30.710674  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.211143  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:31.711278  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.211249  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:32.711431  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.211577  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:33.711520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.210627  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:34.711607  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.210653  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:35.711085  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.211213  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:36.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.210570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:37.710652  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:38.710632  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.210844  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:39.710667  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.210595  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:40.710997  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.210972  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:41.710639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.211558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:42.711501  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.211600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:43.711606  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.211418  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:44.711303  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.210746  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:45.710559  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.210639  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:46.710659  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.211497  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:47.711558  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.211385  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:48.710636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.210636  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:49.710883  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.211287  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:50.710590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.210917  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:51.710809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.210623  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:52.710673  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.210672  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:53.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.210578  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:54.710671  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.210617  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:55.711226  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.211295  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:56.711314  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.211406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:57.711446  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.211464  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:58.710703  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.211414  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:59.711319  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.210772  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:00.710561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.211386  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:01.710908  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.211262  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:02.710640  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.211590  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:03.710555  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.211517  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:04.711490  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.211619  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:05.710621  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.211045  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:06.710665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.210615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:07.710668  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.211521  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:08.711350  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.211355  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:09.711224  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.211378  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:10.710638  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:11.210954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
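Note: the long run of pgrep lines above is api_server.go's roughly 500ms poll ("waiting for apiserver process to appear"); pgrep exits non-zero while no kube-apiserver process matches, and once the wait budget elapses without a match, minikube falls through to the log gathering below. A minimal Go sketch of that poll-until-deadline shape (assumed structure, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries a pgrep check every 500ms until it succeeds
// or the timeout expires.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check the log shows: pgrep exits 0 only when a process matches.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}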
	I1208 00:38:11.710606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:11.710708  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:11.736420  896760 cri.go:89] found id: ""
	I1208 00:38:11.736434  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.736442  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:11.736447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:11.736514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:11.760217  896760 cri.go:89] found id: ""
	I1208 00:38:11.760231  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.760238  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:11.760243  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:11.760300  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:11.784869  896760 cri.go:89] found id: ""
	I1208 00:38:11.784882  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.784895  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:11.784900  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:11.784963  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:11.809329  896760 cri.go:89] found id: ""
	I1208 00:38:11.809345  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.809352  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:11.809357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:11.809412  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:11.836937  896760 cri.go:89] found id: ""
	I1208 00:38:11.836951  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.836958  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:11.836964  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:11.837022  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:11.861979  896760 cri.go:89] found id: ""
	I1208 00:38:11.861993  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.862000  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:11.862006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:11.862067  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:11.891173  896760 cri.go:89] found id: ""
	I1208 00:38:11.891187  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.891194  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:11.891202  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:11.891213  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:11.958401  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:11.958411  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:11.958422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:12.022654  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:12.022674  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:12.054077  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:12.054093  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:12.115415  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:12.115439  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
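	Each pass above probes the control-plane components one by one with crictl and finds nothing. A minimal shell sketch of the same per-component check, assuming only that crictl is on the node's PATH (the component list mirrors the log; the loop itself is illustrative, not minikube's code):
	
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        # crictl ps -a lists running and exited containers; --quiet prints only IDs.
	        ids=$(sudo crictl ps -a --quiet --name="$c")
	        [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done
	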
	I1208 00:38:14.631602  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:14.646925  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:14.646987  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:14.674503  896760 cri.go:89] found id: ""
	I1208 00:38:14.674517  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.674524  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:14.674529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:14.674593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:14.699396  896760 cri.go:89] found id: ""
	I1208 00:38:14.699419  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.699426  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:14.699432  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:14.699503  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:14.724021  896760 cri.go:89] found id: ""
	I1208 00:38:14.724034  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.724042  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:14.724047  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:14.724106  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:14.753658  896760 cri.go:89] found id: ""
	I1208 00:38:14.753672  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.753679  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:14.753684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:14.753749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:14.781621  896760 cri.go:89] found id: ""
	I1208 00:38:14.781635  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.781643  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:14.781649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:14.781707  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:14.807494  896760 cri.go:89] found id: ""
	I1208 00:38:14.807509  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.807516  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:14.807521  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:14.807593  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:14.833097  896760 cri.go:89] found id: ""
	I1208 00:38:14.833112  896760 logs.go:282] 0 containers: []
	W1208 00:38:14.833119  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:14.833126  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:14.833136  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:14.889095  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:14.889114  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:14.903785  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:14.903800  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:14.971093  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:14.963141   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.963548   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965048   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965374   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.966849   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:14.963141   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.963548   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965048   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.965374   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:14.966849   10927 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:14.971115  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:14.971126  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:15.034725  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:15.034748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
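	Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver at localhost:8441 (connection refused), consistent with the empty kube-apiserver container list. A sketch for probing that port directly from inside the node, assuming curl is present there; the kubectl binary and kubeconfig paths are the ones already shown in the log:
	
	    # Raw HTTPS probe of the apiserver port (-k skips certificate verification).
	    sudo curl -sk https://localhost:8441/healthz; echo
	    # The same reachability check through kubectl's raw API access.
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz
	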
	I1208 00:38:17.575615  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:17.586181  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:17.586244  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:17.614489  896760 cri.go:89] found id: ""
	I1208 00:38:17.614503  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.614510  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:17.614516  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:17.614591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:17.646217  896760 cri.go:89] found id: ""
	I1208 00:38:17.646238  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.646245  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:17.646250  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:17.646320  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:17.672670  896760 cri.go:89] found id: ""
	I1208 00:38:17.672684  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.672699  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:17.672705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:17.672771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:17.697872  896760 cri.go:89] found id: ""
	I1208 00:38:17.697886  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.697894  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:17.697899  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:17.697960  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:17.723061  896760 cri.go:89] found id: ""
	I1208 00:38:17.723075  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.723083  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:17.723088  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:17.723148  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:17.751201  896760 cri.go:89] found id: ""
	I1208 00:38:17.751215  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.751257  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:17.751263  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:17.751327  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:17.776877  896760 cri.go:89] found id: ""
	I1208 00:38:17.776898  896760 logs.go:282] 0 containers: []
	W1208 00:38:17.776906  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:17.776914  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:17.776924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:17.833629  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:17.833648  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:17.848545  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:17.848562  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:17.916466  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:17.907244   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.908811   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.909382   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.910922   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.911252   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:17.907244   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.908811   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.909382   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.910922   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:17.911252   11035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:17.916477  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:17.916488  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:17.977728  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:17.977748  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:20.518003  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:20.528606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:20.528668  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:20.553281  896760 cri.go:89] found id: ""
	I1208 00:38:20.553294  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.553301  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:20.553307  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:20.553362  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:20.578221  896760 cri.go:89] found id: ""
	I1208 00:38:20.578241  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.578249  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:20.578254  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:20.578315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:20.615636  896760 cri.go:89] found id: ""
	I1208 00:38:20.615650  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.615657  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:20.615662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:20.615717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:20.658083  896760 cri.go:89] found id: ""
	I1208 00:38:20.658097  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.658104  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:20.658109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:20.658167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:20.683361  896760 cri.go:89] found id: ""
	I1208 00:38:20.683375  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.683382  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:20.683387  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:20.683445  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:20.708740  896760 cri.go:89] found id: ""
	I1208 00:38:20.708754  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.708761  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:20.708767  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:20.708830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:20.733148  896760 cri.go:89] found id: ""
	I1208 00:38:20.733162  896760 logs.go:282] 0 containers: []
	W1208 00:38:20.733169  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:20.733177  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:20.733187  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:20.789345  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:20.789364  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:20.804329  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:20.804344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:20.869258  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:20.860745   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.861580   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863087   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863478   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.865172   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:20.860745   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.861580   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863087   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.863478   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:20.865172   11141 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:20.869270  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:20.869280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:20.935198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:20.935220  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:23.463419  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:23.473440  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:23.473514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:23.498380  896760 cri.go:89] found id: ""
	I1208 00:38:23.498395  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.498402  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:23.498407  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:23.498504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:23.524663  896760 cri.go:89] found id: ""
	I1208 00:38:23.524677  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.524683  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:23.524689  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:23.524749  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:23.554276  896760 cri.go:89] found id: ""
	I1208 00:38:23.554300  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.554308  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:23.554314  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:23.554373  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:23.581295  896760 cri.go:89] found id: ""
	I1208 00:38:23.581310  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.581317  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:23.581322  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:23.581394  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:23.609485  896760 cri.go:89] found id: ""
	I1208 00:38:23.609499  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.609506  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:23.609512  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:23.609568  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:23.639329  896760 cri.go:89] found id: ""
	I1208 00:38:23.639343  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.639350  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:23.639356  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:23.639415  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:23.666775  896760 cri.go:89] found id: ""
	I1208 00:38:23.666789  896760 logs.go:282] 0 containers: []
	W1208 00:38:23.666796  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:23.666804  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:23.666816  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:23.726052  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:23.726071  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:23.741283  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:23.741300  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:23.814882  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:23.806382   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.807106   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.808836   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.809397   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.811003   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:23.806382   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.807106   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.808836   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.809397   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:23.811003   11246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:23.814894  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:23.814918  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:23.882172  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:23.882191  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:26.416809  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:26.427382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:26.427441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:26.455816  896760 cri.go:89] found id: ""
	I1208 00:38:26.455831  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.455838  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:26.455843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:26.455901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:26.481460  896760 cri.go:89] found id: ""
	I1208 00:38:26.481475  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.481482  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:26.481487  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:26.481552  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:26.511736  896760 cri.go:89] found id: ""
	I1208 00:38:26.511750  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.511757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:26.511764  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:26.511824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:26.538164  896760 cri.go:89] found id: ""
	I1208 00:38:26.538185  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.538192  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:26.538197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:26.538263  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:26.564400  896760 cri.go:89] found id: ""
	I1208 00:38:26.564415  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.564423  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:26.564428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:26.564499  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:26.592652  896760 cri.go:89] found id: ""
	I1208 00:38:26.592666  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.592684  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:26.592690  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:26.592756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:26.628887  896760 cri.go:89] found id: ""
	I1208 00:38:26.628913  896760 logs.go:282] 0 containers: []
	W1208 00:38:26.628920  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:26.628928  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:26.628939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:26.645510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:26.645526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:26.715196  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:26.706568   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.707169   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.708723   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.709144   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.710667   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:26.706568   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.707169   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.708723   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.709144   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:26.710667   11349 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:26.715212  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:26.715223  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:26.776374  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:26.776415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:26.805091  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:26.805108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.367761  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:29.378770  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:29.378841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:29.407904  896760 cri.go:89] found id: ""
	I1208 00:38:29.407918  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.407925  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:29.407937  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:29.407996  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:29.439249  896760 cri.go:89] found id: ""
	I1208 00:38:29.439263  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.439270  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:29.439275  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:29.439335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:29.464738  896760 cri.go:89] found id: ""
	I1208 00:38:29.464752  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.464760  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:29.464765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:29.464821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:29.491063  896760 cri.go:89] found id: ""
	I1208 00:38:29.491077  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.491085  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:29.491094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:29.491170  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:29.516981  896760 cri.go:89] found id: ""
	I1208 00:38:29.516995  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.517003  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:29.517008  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:29.517068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:29.542623  896760 cri.go:89] found id: ""
	I1208 00:38:29.542637  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.542644  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:29.542649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:29.542706  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:29.568339  896760 cri.go:89] found id: ""
	I1208 00:38:29.568354  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.568361  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:29.568368  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:29.568377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.628127  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:29.628145  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:29.643477  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:29.643493  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:29.719175  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:29.719187  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:29.719198  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:29.782292  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:29.782317  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.310785  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:32.321344  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:32.321408  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:32.347142  896760 cri.go:89] found id: ""
	I1208 00:38:32.347156  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.347163  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:32.347184  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:32.347243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:32.372733  896760 cri.go:89] found id: ""
	I1208 00:38:32.372748  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.372784  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:32.372789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:32.372848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:32.397366  896760 cri.go:89] found id: ""
	I1208 00:38:32.397381  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.397388  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:32.397394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:32.397458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:32.422998  896760 cri.go:89] found id: ""
	I1208 00:38:32.423012  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.423019  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:32.423025  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:32.423092  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:32.454075  896760 cri.go:89] found id: ""
	I1208 00:38:32.454089  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.454096  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:32.454102  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:32.454163  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:32.480907  896760 cri.go:89] found id: ""
	I1208 00:38:32.480931  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.480938  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:32.480945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:32.481033  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:32.508537  896760 cri.go:89] found id: ""
	I1208 00:38:32.508551  896760 logs.go:282] 0 containers: []
	W1208 00:38:32.508559  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:32.508567  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:32.508577  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:32.536959  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:32.536977  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:32.594663  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:32.594683  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:32.611007  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:32.611023  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:32.685259  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:32.676109   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.676744   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.678716   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.679323   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.681064   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:32.676109   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.676744   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.678716   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.679323   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:32.681064   11569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:32.685271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:32.685293  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
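	Since no kube-apiserver container is ever created, the containerd journal gathered above is the most likely place to find the cause (image pull, sandbox, or runtime errors). A sketch that narrows the same 400-line dump to relevant entries; the grep pattern is an illustrative guess, not taken from the report:
	
	    sudo journalctl -u containerd -n 400 --no-pager \
	        | grep -iE 'kube-apiserver|sandbox|fail|error' || echo "no matching entries"
	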
	I1208 00:38:35.252296  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:35.262679  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:35.262743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:35.288362  896760 cri.go:89] found id: ""
	I1208 00:38:35.288376  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.288384  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:35.288389  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:35.288459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:35.316681  896760 cri.go:89] found id: ""
	I1208 00:38:35.316694  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.316702  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:35.316708  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:35.316771  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:35.341646  896760 cri.go:89] found id: ""
	I1208 00:38:35.341661  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.341668  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:35.341673  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:35.341737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:35.367257  896760 cri.go:89] found id: ""
	I1208 00:38:35.367271  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.367278  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:35.367284  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:35.367343  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:35.391511  896760 cri.go:89] found id: ""
	I1208 00:38:35.391526  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.391533  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:35.391538  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:35.391607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:35.416046  896760 cri.go:89] found id: ""
	I1208 00:38:35.416059  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.416067  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:35.416073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:35.416186  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:35.441892  896760 cri.go:89] found id: ""
	I1208 00:38:35.441906  896760 logs.go:282] 0 containers: []
	W1208 00:38:35.441913  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:35.441921  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:35.441930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:35.498141  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:35.498159  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:35.513190  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:35.513206  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:35.577909  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:35.569957   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.570570   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572191   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572549   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.574028   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:35.569957   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.570570   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572191   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.572549   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:35.574028   11659 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:35.577920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:35.577930  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:35.650521  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:35.650540  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
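	The timestamps show this full diagnostic pass repeating on a roughly three-second cadence (00:38:11, :14, :17, ... :38) with identical results each time. To wait for the apiserver container to appear without re-running the whole pass by hand, a one-liner like the following would do; the interval matches the log's cadence and is purely illustrative:
	
	    # Re-list kube-apiserver containers every 3 seconds until one shows up.
	    watch -n 3 'sudo crictl ps -a --name=kube-apiserver'
	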
	I1208 00:38:38.186415  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:38.196707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:38.196765  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:38.224642  896760 cri.go:89] found id: ""
	I1208 00:38:38.224656  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.224662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:38.224667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:38.224727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:38.250371  896760 cri.go:89] found id: ""
	I1208 00:38:38.250385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.250393  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:38.250397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:38.250490  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:38.275798  896760 cri.go:89] found id: ""
	I1208 00:38:38.275813  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.275820  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:38.275825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:38.275889  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:38.301371  896760 cri.go:89] found id: ""
	I1208 00:38:38.301385  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.301393  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:38.301398  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:38.301458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:38.326436  896760 cri.go:89] found id: ""
	I1208 00:38:38.326475  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.326483  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:38.326489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:38.326548  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:38.352684  896760 cri.go:89] found id: ""
	I1208 00:38:38.352698  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.352705  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:38.352711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:38.352770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:38.377358  896760 cri.go:89] found id: ""
	I1208 00:38:38.377372  896760 logs.go:282] 0 containers: []
	W1208 00:38:38.377379  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:38.377424  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:38.377434  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:38.433300  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:38.433319  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:38.448010  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:38.448031  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:38.509419  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:38.500422   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.500861   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.502805   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.503325   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.504822   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:38.500422   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.500861   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.502805   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.503325   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:38.504822   11763 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:38.509429  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:38.509441  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:38.573641  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:38.573660  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
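
[Editor's note] Every crictl probe above returns empty output, which minikube then reports as the pair `found id: ""` followed by `0 containers: []`. That pairing is consistent with splitting the empty output on newlines (yielding one empty ID) and then filtering empties before counting; this parsing detail is inferred from the log output, not taken from minikube's source. A stand-alone illustration:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// crictl ps -a --quiet prints one container ID per line; when the
	// component is not running at all, the output is empty.
	out := "" // what crictl returned for every component above
	ids := strings.Split(strings.TrimSpace(out), "\n")
	for _, id := range ids {
		fmt.Printf("found id: %q\n", id) // prints: found id: ""
	}
	var nonEmpty []string
	for _, id := range ids {
		if id != "" {
			nonEmpty = append(nonEmpty, id)
		}
	}
	fmt.Printf("%d containers: %v\n", len(nonEmpty), nonEmpty) // prints: 0 containers: []
}
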
	I1208 00:38:41.124146  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:41.134622  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:41.134687  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:41.158822  896760 cri.go:89] found id: ""
	I1208 00:38:41.158837  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.158844  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:41.158850  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:41.158907  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:41.183538  896760 cri.go:89] found id: ""
	I1208 00:38:41.183552  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.183559  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:41.183564  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:41.183621  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:41.211762  896760 cri.go:89] found id: ""
	I1208 00:38:41.211776  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.211783  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:41.211789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:41.211846  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:41.237660  896760 cri.go:89] found id: ""
	I1208 00:38:41.237674  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.237681  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:41.237687  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:41.237746  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:41.263629  896760 cri.go:89] found id: ""
	I1208 00:38:41.263644  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.263651  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:41.263656  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:41.263715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:41.289465  896760 cri.go:89] found id: ""
	I1208 00:38:41.289479  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.289486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:41.289498  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:41.289559  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:41.316932  896760 cri.go:89] found id: ""
	I1208 00:38:41.316948  896760 logs.go:282] 0 containers: []
	W1208 00:38:41.316955  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:41.316963  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:41.316974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:41.380746  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:41.380766  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:41.395918  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:41.395934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:41.460910  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:41.451440   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.453086   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454340   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.454987   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:41.456712   11870 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:41.460920  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:41.460932  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:41.524405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:41.524433  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:44.057087  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:44.067409  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:44.067469  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:44.092978  896760 cri.go:89] found id: ""
	I1208 00:38:44.092992  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.093000  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:44.093005  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:44.093063  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:44.118425  896760 cri.go:89] found id: ""
	I1208 00:38:44.118439  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.118468  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:44.118473  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:44.118537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:44.147582  896760 cri.go:89] found id: ""
	I1208 00:38:44.147597  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.147605  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:44.147610  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:44.147672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:44.173039  896760 cri.go:89] found id: ""
	I1208 00:38:44.173052  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.173060  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:44.173066  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:44.173122  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:44.200035  896760 cri.go:89] found id: ""
	I1208 00:38:44.200048  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.200056  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:44.200064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:44.200124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:44.228628  896760 cri.go:89] found id: ""
	I1208 00:38:44.228643  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.228652  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:44.228658  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:44.228723  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:44.253637  896760 cri.go:89] found id: ""
	I1208 00:38:44.253651  896760 logs.go:282] 0 containers: []
	W1208 00:38:44.253658  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:44.253666  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:44.253678  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:44.285985  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:44.286001  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:44.342819  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:44.342837  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:44.357562  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:44.357578  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:44.424802  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:44.416639   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.417220   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.418704   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.419086   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:44.420560   11986 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:44.424813  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:44.424823  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
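
[Editor's note] The repeated `failed describe nodes` warnings all come from the same probe: the versioned kubectl binary, pointed at the in-VM kubeconfig, targets https://localhost:8441, and with no apiserver listening it exits 1 with the "connection refused" stderr that the wrapper then prints twice (once inside the wrapped error, once between the ** stderr ** markers). A minimal sketch of the probe itself, using the exact command from the log; the surrounding error formatting is an approximation, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exact command from the log; the binary and kubeconfig paths exist
	// only inside the minikube node, so run this there.
	probe := "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", probe).CombinedOutput()
	if err != nil {
		// With nothing listening on localhost:8441, kubectl exits 1 and
		// its stderr carries the "connection refused" lines seen above.
		fmt.Printf("failed describe nodes: %v\noutput:\n%s\n", err, out)
		return
	}
	fmt.Printf("%s\n", out)
}
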
	I1208 00:38:46.987663  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:46.998722  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:46.998782  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:47.028919  896760 cri.go:89] found id: ""
	I1208 00:38:47.028933  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.028941  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:47.028947  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:47.029019  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:47.054503  896760 cri.go:89] found id: ""
	I1208 00:38:47.054517  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.054524  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:47.054529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:47.054591  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:47.080198  896760 cri.go:89] found id: ""
	I1208 00:38:47.080213  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.080220  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:47.080226  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:47.080295  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:47.109584  896760 cri.go:89] found id: ""
	I1208 00:38:47.109600  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.109615  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:47.109621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:47.109705  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:47.140105  896760 cri.go:89] found id: ""
	I1208 00:38:47.140121  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.140128  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:47.140134  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:47.140194  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:47.170105  896760 cri.go:89] found id: ""
	I1208 00:38:47.170119  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.170126  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:47.170131  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:47.170192  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:47.194381  896760 cri.go:89] found id: ""
	I1208 00:38:47.194396  896760 logs.go:282] 0 containers: []
	W1208 00:38:47.194403  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:47.194411  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:47.194421  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:47.250853  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:47.250872  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:47.265858  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:47.265878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:47.337098  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:47.328184   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.328652   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330348   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330938   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.332507   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:47.328184   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.328652   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330348   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.330938   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:47.332507   12078 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:47.337113  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:47.337129  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:47.400033  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:47.400053  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:49.930600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.941210  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:49.941272  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:49.965928  896760 cri.go:89] found id: ""
	I1208 00:38:49.965942  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.965949  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:49.965954  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:49.966013  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:49.991571  896760 cri.go:89] found id: ""
	I1208 00:38:49.991585  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.991592  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:49.991597  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:49.991661  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:50.031199  896760 cri.go:89] found id: ""
	I1208 00:38:50.031218  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.031226  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:50.031233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:50.031308  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:50.058807  896760 cri.go:89] found id: ""
	I1208 00:38:50.058822  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.058830  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:50.058836  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:50.058898  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:50.089259  896760 cri.go:89] found id: ""
	I1208 00:38:50.089273  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.089281  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:50.089287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:50.089360  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:50.115363  896760 cri.go:89] found id: ""
	I1208 00:38:50.115377  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.115385  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:50.115391  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:50.115454  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:50.144975  896760 cri.go:89] found id: ""
	I1208 00:38:50.144990  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.144998  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:50.145006  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:50.145020  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:50.160213  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:50.160230  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:50.226659  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:50.226669  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:50.226681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:50.288844  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:50.288865  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:50.321807  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:50.321824  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
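
[Editor's note] The gather order shuffles between cycles: kubelet comes first in the 00:38:38 cycle, container status first at 00:38:44, dmesg first at 00:38:50. That per-cycle shuffle is consistent with ranging over a Go map, whose iteration order is deliberately randomized; whether minikube actually keeps its log gatherers in a map is an assumption. An illustration of the language behavior:

package main

import "fmt"

func main() {
	// Hypothetical gatherer table; minikube's real data structure is unknown.
	gatherers := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg ... | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"containerd":       "journalctl -u containerd -n 400",
		"container status": "crictl ps -a || docker ps -a",
	}
	for name := range gatherers { // iteration order differs from run to run
		fmt.Println("Gathering logs for", name, "...")
	}
}
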
	I1208 00:38:52.878758  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.892078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:52.892141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:52.919955  896760 cri.go:89] found id: ""
	I1208 00:38:52.919969  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.919977  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:52.919982  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:52.920041  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:52.946242  896760 cri.go:89] found id: ""
	I1208 00:38:52.946256  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.946264  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:52.946269  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:52.946331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:52.976452  896760 cri.go:89] found id: ""
	I1208 00:38:52.976467  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.976475  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:52.976480  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:52.976542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:53.005608  896760 cri.go:89] found id: ""
	I1208 00:38:53.005635  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.005644  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:53.005652  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:53.005729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:53.033758  896760 cri.go:89] found id: ""
	I1208 00:38:53.033773  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.033784  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:53.033789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:53.033848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:53.063554  896760 cri.go:89] found id: ""
	I1208 00:38:53.063568  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.063575  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:53.063581  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:53.063644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:53.093217  896760 cri.go:89] found id: ""
	I1208 00:38:53.093233  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.093241  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:53.093249  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:53.093260  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:53.152571  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:53.152591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:53.167769  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:53.167785  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:53.232572  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:53.232583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:53.232604  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:53.301625  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:53.301653  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:55.831231  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.843576  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:55.843680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:55.878177  896760 cri.go:89] found id: ""
	I1208 00:38:55.878191  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.878198  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:55.878203  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:55.878260  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:55.904640  896760 cri.go:89] found id: ""
	I1208 00:38:55.904660  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.904667  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:55.904672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:55.904729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:55.930143  896760 cri.go:89] found id: ""
	I1208 00:38:55.930156  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.930163  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:55.930168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:55.930223  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:55.954696  896760 cri.go:89] found id: ""
	I1208 00:38:55.954710  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.954717  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:55.954723  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:55.954779  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:55.979424  896760 cri.go:89] found id: ""
	I1208 00:38:55.979438  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.979445  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:55.979453  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:55.979513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:56.010864  896760 cri.go:89] found id: ""
	I1208 00:38:56.010879  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.010887  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:56.010893  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:56.010959  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:56.038141  896760 cri.go:89] found id: ""
	I1208 00:38:56.038155  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.038163  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:56.038171  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:56.038183  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:56.105328  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:56.105339  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:56.105350  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:56.167859  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:56.167878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:56.195618  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:56.195634  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:56.254386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:56.254406  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:58.770585  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.780949  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:58.781010  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:58.804624  896760 cri.go:89] found id: ""
	I1208 00:38:58.804638  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.804645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:58.804651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:58.804710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:58.830257  896760 cri.go:89] found id: ""
	I1208 00:38:58.830271  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.830278  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:58.830283  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:58.830341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:58.870359  896760 cri.go:89] found id: ""
	I1208 00:38:58.870383  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.870390  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:58.870396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:58.870501  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:58.897347  896760 cri.go:89] found id: ""
	I1208 00:38:58.897361  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.897368  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:58.897373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:58.897431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:58.927474  896760 cri.go:89] found id: ""
	I1208 00:38:58.927488  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.927496  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:58.927501  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:58.927563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:58.953358  896760 cri.go:89] found id: ""
	I1208 00:38:58.953372  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.953380  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:58.953386  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:58.953443  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:58.978092  896760 cri.go:89] found id: ""
	I1208 00:38:58.978107  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.978116  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:58.978124  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:58.978134  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:59.008505  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:59.008524  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:59.067065  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:59.067095  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:59.081827  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:59.081843  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:59.148151  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:59.148161  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:59.148172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:01.713848  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.724264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:01.724326  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:01.752237  896760 cri.go:89] found id: ""
	I1208 00:39:01.752251  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.752258  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:01.752264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:01.752325  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:01.778116  896760 cri.go:89] found id: ""
	I1208 00:39:01.778129  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.778136  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:01.778141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:01.778213  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:01.807711  896760 cri.go:89] found id: ""
	I1208 00:39:01.807725  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.807731  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:01.807737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:01.807798  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:01.836797  896760 cri.go:89] found id: ""
	I1208 00:39:01.836812  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.836820  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:01.836826  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:01.836884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:01.863221  896760 cri.go:89] found id: ""
	I1208 00:39:01.863235  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.863242  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:01.863247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:01.863307  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:01.902460  896760 cri.go:89] found id: ""
	I1208 00:39:01.902476  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.902483  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:01.902489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:01.902558  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:01.930861  896760 cri.go:89] found id: ""
	I1208 00:39:01.930874  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.930882  896760 logs.go:284] No container was found matching "kindnet"
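	The poll cycle above first looks for a running kube-apiserver process (pgrep), then asks crictl for containers matching each control-plane component; every query returns an empty ID list, so minikube falls through to gathering kubelet, dmesg, describe-nodes, containerd, and container-status logs, and the whole cycle repeats a few seconds later. A minimal sketch of that crictl enumeration, assuming crictl is on PATH on the node; this is an illustration of the probe, not minikube's actual cri.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The same component list the log walks through, in the same order.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	for _, name := range components {
    		// Equivalent to: sudo crictl ps -a --quiet --name=<name>
    		out, err := exec.Command("sudo", "crictl", "ps", "-a",
    			"--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		// --quiet prints one container ID per line; no output means
    		// no container (running or exited) matched the name filter.
    		if ids := strings.Fields(string(out)); len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    		} else {
    			fmt.Printf("%s: %v\n", name, ids)
    		}
    	}
    }

	That all seven queries come back empty, across every iteration of the loop, indicates the kubelet never created any control-plane containers at all, which is why the subsequent describe-nodes calls can only fail with connection refused.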
	I1208 00:39:01.930889  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:01.930900  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:01.987172  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:01.987190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:02.006975  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:02.006993  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:02.075975  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:02.076005  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:02.076017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:02.142423  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:02.142453  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:04.675643  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.688662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:04.688743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:04.716050  896760 cri.go:89] found id: ""
	I1208 00:39:04.716065  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.716072  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:04.716078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:04.716141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:04.742668  896760 cri.go:89] found id: ""
	I1208 00:39:04.742682  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.742690  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:04.742695  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:04.742756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:04.769375  896760 cri.go:89] found id: ""
	I1208 00:39:04.769388  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.769396  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:04.769401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:04.769459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:04.795270  896760 cri.go:89] found id: ""
	I1208 00:39:04.795284  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.795291  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:04.795297  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:04.795354  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:04.822245  896760 cri.go:89] found id: ""
	I1208 00:39:04.822258  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.822265  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:04.822271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:04.822330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:04.859401  896760 cri.go:89] found id: ""
	I1208 00:39:04.859414  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.859422  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:04.859428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:04.859486  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:04.896707  896760 cri.go:89] found id: ""
	I1208 00:39:04.896721  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.896728  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:04.896736  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:04.896745  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:04.967586  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:04.967603  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:04.983057  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:04.983080  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:05.060799  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:05.060821  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:05.060832  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:05.123856  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:05.123875  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:07.653529  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.664109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:07.664168  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:07.689362  896760 cri.go:89] found id: ""
	I1208 00:39:07.689376  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.689383  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:07.689388  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:07.689448  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:07.714707  896760 cri.go:89] found id: ""
	I1208 00:39:07.714722  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.714729  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:07.714734  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:07.714792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:07.740750  896760 cri.go:89] found id: ""
	I1208 00:39:07.740765  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.740771  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:07.740777  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:07.740834  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:07.765622  896760 cri.go:89] found id: ""
	I1208 00:39:07.765637  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.765645  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:07.765650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:07.765714  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:07.790729  896760 cri.go:89] found id: ""
	I1208 00:39:07.790744  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.790751  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:07.790756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:07.790824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:07.821100  896760 cri.go:89] found id: ""
	I1208 00:39:07.821114  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.821122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:07.821127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:07.821185  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:07.855011  896760 cri.go:89] found id: ""
	I1208 00:39:07.855025  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.855042  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:07.855050  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:07.855061  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:07.916163  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:07.916184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:07.931656  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:07.931672  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:08.007997  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:08.008026  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:08.008039  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:08.079922  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:08.079944  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:10.614429  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.625953  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:10.626015  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:10.651687  896760 cri.go:89] found id: ""
	I1208 00:39:10.651701  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.651708  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:10.651714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:10.651774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:10.676412  896760 cri.go:89] found id: ""
	I1208 00:39:10.676426  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.676433  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:10.676439  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:10.676507  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:10.705971  896760 cri.go:89] found id: ""
	I1208 00:39:10.705986  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.705992  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:10.705998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:10.706058  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:10.730598  896760 cri.go:89] found id: ""
	I1208 00:39:10.730621  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.730629  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:10.730634  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:10.730695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:10.756672  896760 cri.go:89] found id: ""
	I1208 00:39:10.756694  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.756702  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:10.756707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:10.756770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:10.786647  896760 cri.go:89] found id: ""
	I1208 00:39:10.786671  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.786679  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:10.786685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:10.786753  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:10.813024  896760 cri.go:89] found id: ""
	I1208 00:39:10.813037  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.813045  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:10.813063  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:10.813074  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:10.870687  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:10.870718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:10.887434  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:10.887451  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:10.955043  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:10.955053  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:10.955064  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:11.016735  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:11.016756  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:13.547727  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.558158  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:13.558216  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:13.583025  896760 cri.go:89] found id: ""
	I1208 00:39:13.583045  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.583053  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:13.583058  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:13.583119  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:13.608731  896760 cri.go:89] found id: ""
	I1208 00:39:13.608744  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.608751  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:13.608756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:13.608815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:13.634817  896760 cri.go:89] found id: ""
	I1208 00:39:13.634831  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.634838  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:13.634843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:13.634905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:13.659255  896760 cri.go:89] found id: ""
	I1208 00:39:13.659269  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.659276  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:13.659281  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:13.659341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:13.683853  896760 cri.go:89] found id: ""
	I1208 00:39:13.683867  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.683882  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:13.683888  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:13.683949  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:13.708780  896760 cri.go:89] found id: ""
	I1208 00:39:13.708795  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.708802  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:13.708807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:13.708864  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:13.734678  896760 cri.go:89] found id: ""
	I1208 00:39:13.734692  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.734699  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:13.734708  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:13.734718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:13.790576  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:13.790597  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:13.805551  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:13.805567  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:13.884689  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:13.884710  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:13.884721  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:13.954356  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:13.954379  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:16.485706  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.496517  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:16.496577  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:16.526348  896760 cri.go:89] found id: ""
	I1208 00:39:16.526363  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.526370  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:16.526376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:16.526459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:16.551935  896760 cri.go:89] found id: ""
	I1208 00:39:16.551949  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.551962  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:16.551968  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:16.552028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:16.576320  896760 cri.go:89] found id: ""
	I1208 00:39:16.576333  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.576340  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:16.576345  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:16.576403  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:16.605756  896760 cri.go:89] found id: ""
	I1208 00:39:16.605770  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.605777  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:16.605783  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:16.605839  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:16.632121  896760 cri.go:89] found id: ""
	I1208 00:39:16.632134  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.632141  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:16.632146  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:16.632203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:16.660423  896760 cri.go:89] found id: ""
	I1208 00:39:16.660437  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.660444  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:16.660450  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:16.660531  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:16.685576  896760 cri.go:89] found id: ""
	I1208 00:39:16.685595  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.685602  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:16.685610  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:16.685620  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:16.740694  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:16.740712  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:16.755790  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:16.755806  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:16.821132  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:16.821152  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:16.821164  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:16.887057  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:16.887076  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:19.418598  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.428681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:19.428748  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:19.455940  896760 cri.go:89] found id: ""
	I1208 00:39:19.455953  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.455961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:19.455966  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:19.456027  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:19.482046  896760 cri.go:89] found id: ""
	I1208 00:39:19.482060  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.482067  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:19.482073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:19.482130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:19.510706  896760 cri.go:89] found id: ""
	I1208 00:39:19.510720  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.510728  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:19.510733  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:19.510792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:19.535505  896760 cri.go:89] found id: ""
	I1208 00:39:19.535520  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.535528  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:19.535533  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:19.535601  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:19.560234  896760 cri.go:89] found id: ""
	I1208 00:39:19.560248  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.560255  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:19.560261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:19.560328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:19.584606  896760 cri.go:89] found id: ""
	I1208 00:39:19.584621  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.584629  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:19.584637  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:19.584695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:19.613195  896760 cri.go:89] found id: ""
	I1208 00:39:19.613226  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.613234  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:19.613242  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:19.613252  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:19.670165  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:19.670184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:19.685327  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:19.685351  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:19.749894  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:19.749914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:19.749928  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:19.812758  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:19.812779  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:22.352520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.362719  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:22.362790  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:22.387649  896760 cri.go:89] found id: ""
	I1208 00:39:22.387662  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.387669  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:22.387675  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:22.387734  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:22.416444  896760 cri.go:89] found id: ""
	I1208 00:39:22.416458  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.416465  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:22.416470  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:22.416538  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:22.442291  896760 cri.go:89] found id: ""
	I1208 00:39:22.442305  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.442312  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:22.442317  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:22.442377  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:22.466919  896760 cri.go:89] found id: ""
	I1208 00:39:22.466933  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.466940  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:22.466945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:22.467011  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:22.492435  896760 cri.go:89] found id: ""
	I1208 00:39:22.492449  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.492456  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:22.492461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:22.492526  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:22.518157  896760 cri.go:89] found id: ""
	I1208 00:39:22.518183  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.518190  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:22.518197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:22.518266  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:22.544341  896760 cri.go:89] found id: ""
	I1208 00:39:22.544356  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.544363  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:22.544371  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:22.544389  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:22.601655  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:22.601676  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:22.617670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:22.617700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:22.686714  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:22.686725  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:22.686736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:22.749600  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:22.749621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.281783  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.292163  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:25.292227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:25.316234  896760 cri.go:89] found id: ""
	I1208 00:39:25.316249  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.316257  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:25.316262  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:25.316330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:25.350433  896760 cri.go:89] found id: ""
	I1208 00:39:25.350478  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.350485  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:25.350491  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:25.350562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:25.376982  896760 cri.go:89] found id: ""
	I1208 00:39:25.376996  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.377004  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:25.377009  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:25.377076  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:25.402484  896760 cri.go:89] found id: ""
	I1208 00:39:25.402499  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.402506  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:25.402511  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:25.402580  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:25.429596  896760 cri.go:89] found id: ""
	I1208 00:39:25.429611  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.429618  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:25.429624  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:25.429692  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:25.455037  896760 cri.go:89] found id: ""
	I1208 00:39:25.455051  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.455059  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:25.455064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:25.455130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:25.484391  896760 cri.go:89] found id: ""
	I1208 00:39:25.484404  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.484412  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:25.484420  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:25.484430  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.512262  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:25.512282  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:25.569524  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:25.569543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:25.584301  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:25.584316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:25.650571  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:25.650583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:25.650594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.218586  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.229069  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:28.229127  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:28.254473  896760 cri.go:89] found id: ""
	I1208 00:39:28.254487  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.254494  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:28.254499  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:28.254563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:28.283388  896760 cri.go:89] found id: ""
	I1208 00:39:28.283403  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.283410  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:28.283418  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:28.283475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:28.310968  896760 cri.go:89] found id: ""
	I1208 00:39:28.310983  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.310990  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:28.310995  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:28.311061  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:28.336049  896760 cri.go:89] found id: ""
	I1208 00:39:28.336064  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.336072  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:28.336078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:28.336141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:28.360451  896760 cri.go:89] found id: ""
	I1208 00:39:28.360464  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.360470  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:28.360475  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:28.360542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:28.385117  896760 cri.go:89] found id: ""
	I1208 00:39:28.385131  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.385138  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:28.385143  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:28.385196  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:28.408915  896760 cri.go:89] found id: ""
	I1208 00:39:28.408928  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.408935  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:28.408943  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:28.408953  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:28.423316  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:28.423332  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:28.486812  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:28.486823  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:28.486833  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.553325  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:28.553344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:28.582011  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:28.582027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.143204  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.154196  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:31.154264  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:31.181624  896760 cri.go:89] found id: ""
	I1208 00:39:31.181638  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.181645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:31.181650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:31.181713  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:31.207658  896760 cri.go:89] found id: ""
	I1208 00:39:31.207672  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.207679  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:31.207684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:31.207742  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:31.233323  896760 cri.go:89] found id: ""
	I1208 00:39:31.233338  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.233345  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:31.233351  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:31.233411  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:31.258320  896760 cri.go:89] found id: ""
	I1208 00:39:31.258335  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.258342  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:31.258347  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:31.258406  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:31.283846  896760 cri.go:89] found id: ""
	I1208 00:39:31.283860  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.283868  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:31.283873  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:31.283931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:31.310064  896760 cri.go:89] found id: ""
	I1208 00:39:31.310079  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.310086  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:31.310091  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:31.310149  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:31.337328  896760 cri.go:89] found id: ""
	I1208 00:39:31.337350  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.337358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:31.337367  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:31.337377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.392950  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:31.392969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:31.407922  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:31.407939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:31.474904  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:31.474915  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:31.474925  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:31.536814  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:31.536834  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:34.069082  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:34.079471  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:34.079532  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:34.121832  896760 cri.go:89] found id: ""
	I1208 00:39:34.121846  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.121853  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:34.121859  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:34.121923  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:34.151527  896760 cri.go:89] found id: ""
	I1208 00:39:34.151541  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.151548  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:34.151553  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:34.151613  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:34.179098  896760 cri.go:89] found id: ""
	I1208 00:39:34.179113  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.179121  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:34.179126  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:34.179184  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:34.209529  896760 cri.go:89] found id: ""
	I1208 00:39:34.209548  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.209563  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:34.209568  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:34.209655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:34.238235  896760 cri.go:89] found id: ""
	I1208 00:39:34.238249  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.238256  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:34.238261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:34.238318  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:34.263739  896760 cri.go:89] found id: ""
	I1208 00:39:34.263752  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.263760  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:34.263765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:34.263838  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:34.293315  896760 cri.go:89] found id: ""
	I1208 00:39:34.293330  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.293337  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:34.293345  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:34.293356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:34.348849  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:34.348873  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:34.363941  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:34.363958  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:34.430475  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:34.430487  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:34.430501  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:34.492396  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:34.492415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.025457  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:37.036130  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:37.036201  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:37.063581  896760 cri.go:89] found id: ""
	I1208 00:39:37.063595  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.063602  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:37.063609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:37.063672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:37.088301  896760 cri.go:89] found id: ""
	I1208 00:39:37.088320  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.088328  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:37.088334  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:37.088395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:37.124388  896760 cri.go:89] found id: ""
	I1208 00:39:37.124402  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.124409  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:37.124417  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:37.124474  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:37.154807  896760 cri.go:89] found id: ""
	I1208 00:39:37.154821  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.154838  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:37.154843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:37.154912  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:37.180191  896760 cri.go:89] found id: ""
	I1208 00:39:37.180204  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.180212  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:37.180217  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:37.180279  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:37.205379  896760 cri.go:89] found id: ""
	I1208 00:39:37.205394  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.205402  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:37.205408  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:37.205487  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:37.233231  896760 cri.go:89] found id: ""
	I1208 00:39:37.233245  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.233264  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:37.233271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:37.233280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:37.297690  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:37.297709  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.325655  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:37.325682  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:37.385822  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:37.385841  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:37.400660  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:37.400685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:37.463113  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
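	For hand debugging, the "Gathering logs for ..." steps that repeat in each cycle map one-to-one onto the commands below, copied from the Run lines in this log; they are what to run over SSH when a start wedges like this (the binary and kubeconfig paths are specific to this run):
	
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u containerd -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a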
	I1208 00:39:39.963375  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:39.974152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:39.974214  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:40.011461  896760 cri.go:89] found id: ""
	I1208 00:39:40.011477  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.011485  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:40.011492  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:40.011588  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:40.050774  896760 cri.go:89] found id: ""
	I1208 00:39:40.050789  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.050810  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:40.050819  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:40.050895  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:40.078693  896760 cri.go:89] found id: ""
	I1208 00:39:40.078712  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.078737  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:40.078743  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:40.078832  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:40.119774  896760 cri.go:89] found id: ""
	I1208 00:39:40.119787  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.119806  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:40.119812  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:40.119870  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:40.150656  896760 cri.go:89] found id: ""
	I1208 00:39:40.150682  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.150689  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:40.150694  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:40.150761  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:40.182218  896760 cri.go:89] found id: ""
	I1208 00:39:40.182233  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.182247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:40.182253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:40.182329  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:40.212756  896760 cri.go:89] found id: ""
	I1208 00:39:40.212770  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.212778  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:40.212786  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:40.212796  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:40.271111  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:40.271135  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:40.286128  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:40.286144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:40.350612  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:40.350622  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:40.350633  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:40.413198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:40.413217  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:42.941473  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:42.951830  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:42.951896  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:42.977279  896760 cri.go:89] found id: ""
	I1208 00:39:42.977294  896760 logs.go:282] 0 containers: []
	W1208 00:39:42.977303  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:42.977309  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:42.977378  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:43.005862  896760 cri.go:89] found id: ""
	I1208 00:39:43.005878  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.005886  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:43.005891  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:43.006072  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:43.033594  896760 cri.go:89] found id: ""
	I1208 00:39:43.033609  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.033616  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:43.033621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:43.033700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:43.058971  896760 cri.go:89] found id: ""
	I1208 00:39:43.058986  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.058993  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:43.058999  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:43.059056  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:43.084568  896760 cri.go:89] found id: ""
	I1208 00:39:43.084582  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.084590  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:43.084595  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:43.084657  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:43.121795  896760 cri.go:89] found id: ""
	I1208 00:39:43.121810  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.121818  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:43.121823  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:43.121884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:43.151337  896760 cri.go:89] found id: ""
	I1208 00:39:43.151351  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.151358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:43.151365  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:43.151375  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:43.212011  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:43.212032  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:43.227510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:43.227526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:43.293650  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:43.293672  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:43.293684  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:43.355405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:43.355425  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:45.883665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:45.894220  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:45.894287  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:45.919120  896760 cri.go:89] found id: ""
	I1208 00:39:45.919134  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.919141  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:45.919147  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:45.919202  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:45.944078  896760 cri.go:89] found id: ""
	I1208 00:39:45.944092  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.944100  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:45.944105  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:45.944171  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:45.969419  896760 cri.go:89] found id: ""
	I1208 00:39:45.969433  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.969440  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:45.969445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:45.969504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:45.999721  896760 cri.go:89] found id: ""
	I1208 00:39:45.999736  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.999744  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:45.999749  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:45.999807  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:46.027671  896760 cri.go:89] found id: ""
	I1208 00:39:46.027685  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.027697  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:46.027705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:46.027763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:46.053035  896760 cri.go:89] found id: ""
	I1208 00:39:46.053050  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.053058  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:46.053064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:46.053124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:46.077745  896760 cri.go:89] found id: ""
	I1208 00:39:46.077759  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.077767  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:46.077775  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:46.077786  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:46.137068  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:46.137086  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:46.153304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:46.153320  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:46.226313  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:46.226334  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:46.226345  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:46.290116  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:46.290137  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:48.819903  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:48.830265  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:48.830328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:48.855396  896760 cri.go:89] found id: ""
	I1208 00:39:48.855411  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.855418  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:48.855423  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:48.855483  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:48.880269  896760 cri.go:89] found id: ""
	I1208 00:39:48.880282  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.880289  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:48.880294  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:48.880353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:48.904626  896760 cri.go:89] found id: ""
	I1208 00:39:48.904641  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.904648  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:48.904653  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:48.904715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:48.930484  896760 cri.go:89] found id: ""
	I1208 00:39:48.930511  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.930519  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:48.930528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:48.930609  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:48.956159  896760 cri.go:89] found id: ""
	I1208 00:39:48.956173  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.956180  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:48.956185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:48.956243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:48.984643  896760 cri.go:89] found id: ""
	I1208 00:39:48.984657  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.984664  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:48.984670  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:48.984737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:49.012694  896760 cri.go:89] found id: ""
	I1208 00:39:49.012708  896760 logs.go:282] 0 containers: []
	W1208 00:39:49.012716  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:49.012724  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:49.012736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:49.042898  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:49.042915  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:49.099079  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:49.099099  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:49.118877  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:49.118895  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:49.190253  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:49.190263  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:49.190273  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:51.751406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:51.761914  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:51.761973  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:51.788354  896760 cri.go:89] found id: ""
	I1208 00:39:51.788367  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.788375  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:51.788381  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:51.788441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:51.816636  896760 cri.go:89] found id: ""
	I1208 00:39:51.816651  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.816658  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:51.816664  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:51.816735  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:51.842160  896760 cri.go:89] found id: ""
	I1208 00:39:51.842174  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.842181  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:51.842187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:51.842249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:51.867343  896760 cri.go:89] found id: ""
	I1208 00:39:51.867358  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.867365  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:51.867371  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:51.867432  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:51.891589  896760 cri.go:89] found id: ""
	I1208 00:39:51.891604  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.891611  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:51.891616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:51.891681  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:51.915982  896760 cri.go:89] found id: ""
	I1208 00:39:51.915997  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.916016  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:51.916023  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:51.916081  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:51.940386  896760 cri.go:89] found id: ""
	I1208 00:39:51.940399  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.940406  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:51.940414  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:51.940424  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:51.995386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:51.995404  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:52.011670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:52.011689  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:52.085018  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:52.085029  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:52.085041  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:52.155066  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:52.155085  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
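	The probe loop above repeats the same control-plane checks roughly every three seconds. For readers reproducing one iteration by hand, here is a minimal shell sketch; each command is taken verbatim from the log lines above, while the loop over component names is an editorial convenience, not something minikube runs as a single script:

	    #!/bin/bash
	    # One iteration of the control-plane probe seen in this log.
	    # Commands are copied from the log; the for-loop is a convenience.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
	      || echo "no kube-apiserver process"
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    # Same crictl-or-docker fallback the log uses for "container status":
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a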
	I1208 00:39:54.698041  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:54.708958  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:54.709024  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:54.734899  896760 cri.go:89] found id: ""
	I1208 00:39:54.734913  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.734921  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:54.734926  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:54.734985  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:54.761966  896760 cri.go:89] found id: ""
	I1208 00:39:54.761981  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.761988  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:54.761993  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:54.762052  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:54.787505  896760 cri.go:89] found id: ""
	I1208 00:39:54.787519  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.787526  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:54.787532  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:54.787595  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:54.813125  896760 cri.go:89] found id: ""
	I1208 00:39:54.813139  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.813147  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:54.813152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:54.813212  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:54.840170  896760 cri.go:89] found id: ""
	I1208 00:39:54.840185  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.840193  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:54.840198  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:54.840269  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:54.865780  896760 cri.go:89] found id: ""
	I1208 00:39:54.865794  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.865801  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:54.865807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:54.865867  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:54.890971  896760 cri.go:89] found id: ""
	I1208 00:39:54.890992  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.891000  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:54.891007  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:54.891017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:54.953695  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:54.953715  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:54.985753  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:54.985770  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:55.051156  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:55.051176  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:55.066530  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:55.066547  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:55.148075  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:39:57.649726  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:57.660051  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:57.660109  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:57.685992  896760 cri.go:89] found id: ""
	I1208 00:39:57.686008  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.686015  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:57.686022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:57.686165  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:57.711195  896760 cri.go:89] found id: ""
	I1208 00:39:57.711209  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.711216  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:57.711224  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:57.711285  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:57.735850  896760 cri.go:89] found id: ""
	I1208 00:39:57.735864  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.735871  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:57.735877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:57.735936  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:57.761018  896760 cri.go:89] found id: ""
	I1208 00:39:57.761032  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.761040  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:57.761045  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:57.761110  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:57.787523  896760 cri.go:89] found id: ""
	I1208 00:39:57.787537  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.787544  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:57.787550  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:57.787607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:57.813621  896760 cri.go:89] found id: ""
	I1208 00:39:57.813641  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.813648  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:57.813654  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:57.813717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:57.837687  896760 cri.go:89] found id: ""
	I1208 00:39:57.837700  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.837707  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:57.837715  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:57.837725  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:57.901756  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:57.901780  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:57.931916  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:57.931943  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:57.989769  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:57.989791  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:58.005304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:58.005324  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:58.084868  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:58.076761   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.077366   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.078876   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.079370   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.080995   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:00.590352  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:00.608394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:00.608458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:00.646794  896760 cri.go:89] found id: ""
	I1208 00:40:00.646810  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.646818  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:00.646825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:00.646893  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:00.722151  896760 cri.go:89] found id: ""
	I1208 00:40:00.722167  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.722175  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:00.722180  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:00.722252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:00.751689  896760 cri.go:89] found id: ""
	I1208 00:40:00.751705  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.751713  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:00.751720  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:00.751795  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:00.782551  896760 cri.go:89] found id: ""
	I1208 00:40:00.782577  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.782586  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:00.782593  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:00.782674  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:00.813259  896760 cri.go:89] found id: ""
	I1208 00:40:00.813275  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.813282  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:00.813287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:00.813353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:00.843171  896760 cri.go:89] found id: ""
	I1208 00:40:00.843193  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.843201  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:00.843206  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:00.843270  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:00.872240  896760 cri.go:89] found id: ""
	I1208 00:40:00.872266  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.872275  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:00.872283  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:00.872297  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:00.933096  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:00.933116  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:00.949661  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:00.949685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:01.022088  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:01.012633   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.013181   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015065   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015751   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.017482   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:01.022099  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:01.022112  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:01.087987  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:01.088007  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.623088  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:03.637929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:03.637992  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:03.667258  896760 cri.go:89] found id: ""
	I1208 00:40:03.667272  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.667280  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:03.667286  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:03.667347  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:03.704022  896760 cri.go:89] found id: ""
	I1208 00:40:03.704035  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.704042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:03.704048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:03.704115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:03.733401  896760 cri.go:89] found id: ""
	I1208 00:40:03.733416  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.733423  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:03.733428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:03.733489  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:03.760028  896760 cri.go:89] found id: ""
	I1208 00:40:03.760042  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.760049  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:03.760054  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:03.760113  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:03.784849  896760 cri.go:89] found id: ""
	I1208 00:40:03.784864  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.784871  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:03.784877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:03.784934  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:03.809615  896760 cri.go:89] found id: ""
	I1208 00:40:03.809629  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.809636  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:03.809642  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:03.809700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:03.834857  896760 cri.go:89] found id: ""
	I1208 00:40:03.834872  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.834879  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:03.834886  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:03.834896  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:03.899301  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:03.891341   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.891827   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893391   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893830   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.895307   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:03.899312  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:03.899330  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:03.961403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:03.961422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.990248  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:03.990265  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:04.049257  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:04.049280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
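	Every kubectl attempt in this window fails with connection refused on [::1]:8441, which means nothing is listening on the apiserver port at all rather than the apiserver rejecting requests. Two hypothetical spot-checks (not part of this log) that would confirm that from inside the node:

	    # Hypothetical checks, not from the minikube output above.
	    sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
	    # Expect the same connection-refused error kubectl reports:
	    curl -k https://localhost:8441/healthz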
	I1208 00:40:06.564731  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:06.575277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:06.575339  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:06.603640  896760 cri.go:89] found id: ""
	I1208 00:40:06.603653  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.603662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:06.603668  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:06.603727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:06.632743  896760 cri.go:89] found id: ""
	I1208 00:40:06.632757  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.632764  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:06.632769  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:06.632830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:06.661586  896760 cri.go:89] found id: ""
	I1208 00:40:06.661600  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.661608  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:06.661613  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:06.661675  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:06.686811  896760 cri.go:89] found id: ""
	I1208 00:40:06.686833  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.686840  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:06.686845  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:06.686905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:06.712624  896760 cri.go:89] found id: ""
	I1208 00:40:06.712639  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.712646  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:06.712651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:06.712710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:06.737865  896760 cri.go:89] found id: ""
	I1208 00:40:06.737878  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.737898  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:06.737903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:06.737971  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:06.763555  896760 cri.go:89] found id: ""
	I1208 00:40:06.763569  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.763576  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:06.763583  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:06.763594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:06.820256  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:06.820275  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:06.835590  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:06.835606  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:06.900244  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:06.891950   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.892370   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.893980   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.894309   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.895881   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:06.900256  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:06.900269  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:06.964553  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:06.964573  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.497887  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:09.511442  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:09.511513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:09.537551  896760 cri.go:89] found id: ""
	I1208 00:40:09.537566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.537573  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:09.537579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:09.537639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:09.564387  896760 cri.go:89] found id: ""
	I1208 00:40:09.564400  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.564408  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:09.564412  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:09.564471  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:09.592551  896760 cri.go:89] found id: ""
	I1208 00:40:09.592566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.592573  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:09.592579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:09.592638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:09.633536  896760 cri.go:89] found id: ""
	I1208 00:40:09.633553  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.633564  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:09.633572  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:09.633644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:09.661685  896760 cri.go:89] found id: ""
	I1208 00:40:09.661700  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.661706  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:09.661711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:09.661773  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:09.689367  896760 cri.go:89] found id: ""
	I1208 00:40:09.689382  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.689390  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:09.689396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:09.689461  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:09.715001  896760 cri.go:89] found id: ""
	I1208 00:40:09.715025  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.715033  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:09.715041  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:09.715052  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.743922  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:09.743939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:09.801833  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:09.801852  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:09.817182  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:09.817199  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:09.885006  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:09.877198   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.877824   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.878892   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.879505   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.881097   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:09.885017  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:09.885028  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.453176  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:12.463998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:12.464059  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:12.489947  896760 cri.go:89] found id: ""
	I1208 00:40:12.489961  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.489968  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:12.489974  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:12.490053  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:12.517571  896760 cri.go:89] found id: ""
	I1208 00:40:12.517586  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.517594  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:12.517601  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:12.517680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:12.549649  896760 cri.go:89] found id: ""
	I1208 00:40:12.549671  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.549679  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:12.549685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:12.549764  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:12.575870  896760 cri.go:89] found id: ""
	I1208 00:40:12.575891  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.575899  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:12.575903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:12.575975  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:12.615650  896760 cri.go:89] found id: ""
	I1208 00:40:12.615664  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.615672  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:12.615677  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:12.615745  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:12.644432  896760 cri.go:89] found id: ""
	I1208 00:40:12.644446  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.644454  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:12.644460  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:12.644536  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:12.671471  896760 cri.go:89] found id: ""
	I1208 00:40:12.671485  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.671492  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:12.671499  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:12.671510  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:12.728175  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:12.728195  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:12.743959  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:12.743975  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:12.816570  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
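Each retry cycle starts by asking the CRI runtime whether any control-plane container exists at all, and only falls back to journal collection once every query comes back empty. A condensed sketch of that check, reusing the exact crictl invocation recorded above (the surrounding loop is illustrative, not minikube's own code):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      # -a includes exited containers; --quiet prints only container IDs.
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done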
	I1208 00:40:12.816580  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:12.816591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.879403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:12.879423  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:15.414366  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:15.424841  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:15.424901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:15.450991  896760 cri.go:89] found id: ""
	I1208 00:40:15.451005  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.451012  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:15.451017  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:15.451078  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:15.477340  896760 cri.go:89] found id: ""
	I1208 00:40:15.477354  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.477361  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:15.477366  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:15.477424  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:15.503035  896760 cri.go:89] found id: ""
	I1208 00:40:15.503048  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.503055  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:15.503060  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:15.503125  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:15.527771  896760 cri.go:89] found id: ""
	I1208 00:40:15.527787  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.527794  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:15.527798  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:15.527856  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:15.556600  896760 cri.go:89] found id: ""
	I1208 00:40:15.556627  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.556634  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:15.556639  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:15.556710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:15.582706  896760 cri.go:89] found id: ""
	I1208 00:40:15.582721  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.582728  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:15.582737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:15.582821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:15.628093  896760 cri.go:89] found id: ""
	I1208 00:40:15.628114  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.628121  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:15.628129  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:15.628144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:15.691996  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:15.692026  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:15.707812  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:15.707830  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:15.773396  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:15.773407  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:15.773418  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:15.840937  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:15.840957  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:18.375079  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:18.385866  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:18.385931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:18.412581  896760 cri.go:89] found id: ""
	I1208 00:40:18.412596  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.412603  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:18.412609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:18.412672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:18.443837  896760 cri.go:89] found id: ""
	I1208 00:40:18.443863  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.443871  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:18.443876  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:18.443950  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:18.470522  896760 cri.go:89] found id: ""
	I1208 00:40:18.470549  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.470557  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:18.470565  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:18.470639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:18.500112  896760 cri.go:89] found id: ""
	I1208 00:40:18.500127  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.500136  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:18.500141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:18.500203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:18.528643  896760 cri.go:89] found id: ""
	I1208 00:40:18.528657  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.528666  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:18.528672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:18.528740  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:18.556708  896760 cri.go:89] found id: ""
	I1208 00:40:18.556722  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.556729  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:18.556735  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:18.556799  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:18.586255  896760 cri.go:89] found id: ""
	I1208 00:40:18.586270  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.586277  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:18.586285  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:18.586295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:18.651954  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:18.651974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:18.668271  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:18.668288  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:18.735458  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
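Note that the "describe nodes" step runs the version-matched kubectl binary minikube ships inside the node, pointed at the node-local kubeconfig, so the failure is independent of whatever kubectl the host has installed. The invocation, exactly as logged:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig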
	I1208 00:40:18.735469  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:18.735481  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:18.797791  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:18.797811  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.328343  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:21.339006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:21.339068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:21.365940  896760 cri.go:89] found id: ""
	I1208 00:40:21.365954  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.365961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:21.365967  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:21.366028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:21.393056  896760 cri.go:89] found id: ""
	I1208 00:40:21.393071  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.393078  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:21.393083  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:21.393147  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:21.418602  896760 cri.go:89] found id: ""
	I1208 00:40:21.418616  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.418624  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:21.418630  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:21.418689  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:21.444947  896760 cri.go:89] found id: ""
	I1208 00:40:21.444963  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.444970  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:21.444976  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:21.445037  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:21.486428  896760 cri.go:89] found id: ""
	I1208 00:40:21.486461  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.486469  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:21.486476  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:21.486537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:21.516432  896760 cri.go:89] found id: ""
	I1208 00:40:21.516448  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.516455  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:21.516461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:21.516527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:21.542473  896760 cri.go:89] found id: ""
	I1208 00:40:21.542488  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.542501  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:21.542510  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:21.542521  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:21.558088  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:21.558105  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:21.646839  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:21.646850  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:21.646861  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:21.711182  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:21.711203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.739373  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:21.739391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.296477  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:24.307018  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:24.307079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:24.333480  896760 cri.go:89] found id: ""
	I1208 00:40:24.333502  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.333521  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:24.333526  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:24.333587  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:24.359023  896760 cri.go:89] found id: ""
	I1208 00:40:24.359037  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.359044  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:24.359049  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:24.359118  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:24.384337  896760 cri.go:89] found id: ""
	I1208 00:40:24.384351  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.384358  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:24.384363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:24.384425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:24.409687  896760 cri.go:89] found id: ""
	I1208 00:40:24.409702  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.409709  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:24.409714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:24.409774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:24.434605  896760 cri.go:89] found id: ""
	I1208 00:40:24.434620  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.434627  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:24.434633  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:24.434690  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:24.464542  896760 cri.go:89] found id: ""
	I1208 00:40:24.464556  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.464569  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:24.464575  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:24.464638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:24.489131  896760 cri.go:89] found id: ""
	I1208 00:40:24.489145  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.489152  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:24.489159  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:24.489170  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.544278  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:24.544298  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:24.560095  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:24.560152  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:24.637902  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:24.637914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:24.637924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:24.706243  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:24.706262  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.237246  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:27.247681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:27.247744  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:27.272827  896760 cri.go:89] found id: ""
	I1208 00:40:27.272841  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.272848  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:27.272854  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:27.272917  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:27.298021  896760 cri.go:89] found id: ""
	I1208 00:40:27.298035  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.298042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:27.298048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:27.298115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:27.322943  896760 cri.go:89] found id: ""
	I1208 00:40:27.322975  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.322983  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:27.322989  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:27.323049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:27.348507  896760 cri.go:89] found id: ""
	I1208 00:40:27.348522  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.348530  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:27.348535  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:27.348604  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:27.373824  896760 cri.go:89] found id: ""
	I1208 00:40:27.373838  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.373846  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:27.373851  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:27.373911  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:27.399388  896760 cri.go:89] found id: ""
	I1208 00:40:27.399402  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.399409  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:27.399415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:27.399481  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:27.427571  896760 cri.go:89] found id: ""
	I1208 00:40:27.427596  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.427604  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:27.427612  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:27.427621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:27.492713  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:27.492731  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.522269  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:27.522295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:27.582384  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:27.582402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:27.602834  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:27.602850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:27.689958  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:27.681544   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.682073   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.683995   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.684357   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.685900   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:27.681544   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.682073   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.683995   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.684357   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.685900   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:30.190338  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:30.201839  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:30.201909  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:30.228924  896760 cri.go:89] found id: ""
	I1208 00:40:30.228939  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.228956  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:30.228963  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:30.229026  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:30.255337  896760 cri.go:89] found id: ""
	I1208 00:40:30.255351  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.255358  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:30.255363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:30.255425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:30.281566  896760 cri.go:89] found id: ""
	I1208 00:40:30.281581  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.281588  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:30.281594  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:30.281655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:30.308175  896760 cri.go:89] found id: ""
	I1208 00:40:30.308189  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.308197  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:30.308202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:30.308282  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:30.336203  896760 cri.go:89] found id: ""
	I1208 00:40:30.336218  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.336226  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:30.336241  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:30.336302  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:30.368832  896760 cri.go:89] found id: ""
	I1208 00:40:30.368847  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.368855  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:30.368860  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:30.368940  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:30.396840  896760 cri.go:89] found id: ""
	I1208 00:40:30.396855  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.396862  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:30.396870  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:30.396880  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:30.458293  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:30.458313  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:30.489792  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:30.489807  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:30.546970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:30.546989  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:30.561949  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:30.561969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:30.648665  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:30.640064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.641064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.642741   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.643112   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.644583   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:30.640064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.641064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.642741   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.643112   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.644583   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
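The whole sequence then repeats on a roughly three-second cadence, re-running the pgrep probe for a kube-apiserver process at the top of each cycle. As an illustration only (minikube implements this wait in Go, not shell), the polling is equivalent to:

    # Illustrative poll, not minikube's implementation: wait for an
    # apiserver process to appear, re-checking every 3 seconds.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done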
	I1208 00:40:33.148954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:33.159678  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:33.159739  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:33.188692  896760 cri.go:89] found id: ""
	I1208 00:40:33.188707  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.188725  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:33.188731  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:33.188815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:33.214527  896760 cri.go:89] found id: ""
	I1208 00:40:33.214542  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.214550  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:33.214555  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:33.214614  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:33.241307  896760 cri.go:89] found id: ""
	I1208 00:40:33.241323  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.241331  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:33.241336  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:33.241395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:33.267242  896760 cri.go:89] found id: ""
	I1208 00:40:33.267257  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.267265  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:33.267271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:33.267331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:33.293623  896760 cri.go:89] found id: ""
	I1208 00:40:33.293637  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.293645  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:33.293650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:33.293710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:33.319375  896760 cri.go:89] found id: ""
	I1208 00:40:33.319388  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.319395  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:33.319401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:33.319477  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:33.345164  896760 cri.go:89] found id: ""
	I1208 00:40:33.345178  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.345186  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:33.345193  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:33.345203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:33.402766  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:33.402783  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:33.417559  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:33.417576  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:33.484831  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:33.475790   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.476662   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.478492   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.479126   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.480879   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:33.475790   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.476662   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.478492   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.479126   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.480879   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:33.484841  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:33.484851  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:33.553499  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:33.553527  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.087539  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:36.098484  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:36.098549  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:36.123061  896760 cri.go:89] found id: ""
	I1208 00:40:36.123075  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.123083  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:36.123089  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:36.123150  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:36.152786  896760 cri.go:89] found id: ""
	I1208 00:40:36.152800  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.152807  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:36.152813  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:36.152874  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:36.179122  896760 cri.go:89] found id: ""
	I1208 00:40:36.179137  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.179144  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:36.179150  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:36.179211  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:36.205226  896760 cri.go:89] found id: ""
	I1208 00:40:36.205239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.205247  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:36.205253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:36.205311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:36.231018  896760 cri.go:89] found id: ""
	I1208 00:40:36.231033  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.231040  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:36.231046  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:36.231104  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:36.257226  896760 cri.go:89] found id: ""
	I1208 00:40:36.257239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.257247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:36.257253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:36.257312  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:36.282378  896760 cri.go:89] found id: ""
	I1208 00:40:36.282395  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.282402  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:36.282411  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:36.282422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:36.297365  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:36.297381  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:36.361334  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:36.352968   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.353402   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355210   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355743   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.357267   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:36.352968   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.353402   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355210   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355743   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.357267   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
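Every `kubectl describe nodes` attempt fails the same way: "connection refused" on localhost:8441 means nothing is bound to the apiserver port at all, as opposed to a timeout from a slow or firewalled endpoint. A minimal sketch of that distinction, assuming the port 8441 this profile was started with:

```go
// Minimal sketch: a TCP dial that fails immediately with "connection
// refused" means no process is listening on the port, unlike a dial
// that hangs until the timeout expires.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "localhost:8441" // the --apiserver-port used by this test profile
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}
```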
	I1208 00:40:36.361345  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:36.361356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:36.425983  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:36.426003  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.458376  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:36.458391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
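The "Gathering logs" steps simply shell out to journalctl (and dmesg) on the node over SSH. A hypothetical standalone equivalent of the journal fetch, assuming a systemd host with passwordless sudo:

```go
// Hypothetical standalone equivalent of the "Gathering logs" step above:
// fetch the last N journal lines for a systemd unit. minikube runs the
// same command remotely via its ssh_runner.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string, lines int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(lines)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(unit, 400)
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== %s: captured %d bytes ===\n", unit, len(logs))
	}
}
```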
	[log condensed: the identical polling cycle repeats at 00:40:39, 00:40:42, 00:40:45, 00:40:48, 00:40:51, and 00:40:54. In every iteration `sudo pgrep -xnf kube-apiserver.*minikube.*` and `sudo crictl ps -a --quiet --name=<component>` find no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and `kubectl describe nodes` fails with: The connection to the server localhost:8441 was refused - did you specify the right host or port?]
	I1208 00:40:56.875691  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:56.887842  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:56.887906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:56.916158  896760 cri.go:89] found id: ""
	I1208 00:40:56.916172  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.916179  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:56.916185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:56.916243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:56.940916  896760 cri.go:89] found id: ""
	I1208 00:40:56.940930  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.940937  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:56.940942  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:56.941002  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:56.964346  896760 cri.go:89] found id: ""
	I1208 00:40:56.964361  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.964368  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:56.964373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:56.964431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:56.989502  896760 cri.go:89] found id: ""
	I1208 00:40:56.989516  896760 logs.go:282] 0 containers: []
	W1208 00:40:56.989523  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:56.989528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:56.989590  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:57.017437  896760 cri.go:89] found id: ""
	I1208 00:40:57.017452  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.017459  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:57.017465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:57.017527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:57.044860  896760 cri.go:89] found id: ""
	I1208 00:40:57.044873  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.044880  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:57.044886  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:57.044943  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:57.070028  896760 cri.go:89] found id: ""
	I1208 00:40:57.070043  896760 logs.go:282] 0 containers: []
	W1208 00:40:57.070050  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:57.070058  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:57.070069  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:57.133938  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:57.133960  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:57.163813  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:57.163828  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:57.219970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:57.219990  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:57.234793  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:57.234810  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:57.297123  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:57.289483   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.289899   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291403   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291716   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.293180   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:57.289483   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.289899   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291403   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.291716   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:57.293180   16705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
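Each cycle opens with a process-level check before the container-level ones: `pgrep -x -n -f` selects the newest process whose full command line matches the pattern and exits non-zero when none does, which is what keeps the retry loop going here. A small illustrative sketch (not minikube code):

```go
// Illustrative sketch (not minikube code) of the process-level probe that
// opens each cycle; pgrep exits 1 when nothing matches the pattern.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found (pgrep exited non-zero)")
		return
	}
	fmt.Printf("kube-apiserver pid(s): %s\n", strings.TrimSpace(string(out)))
}
```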
	I1208 00:40:59.797409  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:59.807447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:59.807521  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:59.831111  896760 cri.go:89] found id: ""
	I1208 00:40:59.831126  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.831139  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:59.831145  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:59.831204  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:59.862164  896760 cri.go:89] found id: ""
	I1208 00:40:59.862178  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.862185  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:59.862190  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:59.862245  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:59.913907  896760 cri.go:89] found id: ""
	I1208 00:40:59.913921  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.913928  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:59.913933  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:59.913990  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:59.938219  896760 cri.go:89] found id: ""
	I1208 00:40:59.938235  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.938242  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:59.938247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:59.938309  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:59.965447  896760 cri.go:89] found id: ""
	I1208 00:40:59.965460  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.965479  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:59.965485  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:59.965551  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:59.989806  896760 cri.go:89] found id: ""
	I1208 00:40:59.989820  896760 logs.go:282] 0 containers: []
	W1208 00:40:59.989827  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:59.989833  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:59.989891  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:00.115094  896760 cri.go:89] found id: ""
	I1208 00:41:00.115110  896760 logs.go:282] 0 containers: []
	W1208 00:41:00.115118  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:00.115126  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:00.115138  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:00.211003  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:00.211027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:00.261522  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:00.261543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:00.334293  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:00.334316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:00.381440  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:00.381465  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:00.482780  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:00.472456   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.473594   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.474550   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476576   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476966   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:00.472456   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.473594   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.474550   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476576   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:00.476966   16808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
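Each retry above runs the same probe: look for a kube-apiserver process, then ask containerd (via crictl) whether any control-plane container exists, running or exited. A minimal manual version of that probe, using only commands that appear verbatim in this log (run inside the node, e.g. after 'minikube ssh -p functional-386544'):

	# Is an apiserver process alive at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Any control-plane containers, running or exited?
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done

Empty output for every component, as in this run, means the kubelet never created the static pods, which is why every kubectl call to localhost:8441 is refused.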
	I1208 00:41:02.983027  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:02.993616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:02.993677  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:03.021098  896760 cri.go:89] found id: ""
	I1208 00:41:03.021114  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.021122  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:03.021128  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:03.021189  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:03.047499  896760 cri.go:89] found id: ""
	I1208 00:41:03.047521  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.047528  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:03.047534  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:03.047594  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:03.072719  896760 cri.go:89] found id: ""
	I1208 00:41:03.072749  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.072757  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:03.072762  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:03.072841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:03.098912  896760 cri.go:89] found id: ""
	I1208 00:41:03.098927  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.098934  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:03.098939  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:03.099001  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:03.125225  896760 cri.go:89] found id: ""
	I1208 00:41:03.125239  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.125247  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:03.125252  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:03.125311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:03.151371  896760 cri.go:89] found id: ""
	I1208 00:41:03.151384  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.151392  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:03.151397  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:03.151457  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:03.176410  896760 cri.go:89] found id: ""
	I1208 00:41:03.176424  896760 logs.go:282] 0 containers: []
	W1208 00:41:03.176432  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:03.176439  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:03.176450  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:03.231731  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:03.231750  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:03.246857  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:03.246874  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:03.313632  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:03.304930   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.305752   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307366   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307927   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.309517   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:03.304930   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.305752   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307366   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.307927   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:03.309517   16901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:03.313651  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:03.313662  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:03.381170  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:03.381190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:05.911707  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:05.922187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:05.922249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:05.946678  896760 cri.go:89] found id: ""
	I1208 00:41:05.946692  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.946698  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:05.946704  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:05.946760  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:05.971332  896760 cri.go:89] found id: ""
	I1208 00:41:05.971344  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.971351  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:05.971357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:05.971418  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:05.996173  896760 cri.go:89] found id: ""
	I1208 00:41:05.996187  896760 logs.go:282] 0 containers: []
	W1208 00:41:05.996194  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:05.996200  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:05.996257  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:06.029475  896760 cri.go:89] found id: ""
	I1208 00:41:06.029489  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.029497  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:06.029502  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:06.029578  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:06.058996  896760 cri.go:89] found id: ""
	I1208 00:41:06.059009  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.059017  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:06.059022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:06.059079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:06.083207  896760 cri.go:89] found id: ""
	I1208 00:41:06.083220  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.083227  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:06.083233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:06.083301  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:06.108817  896760 cri.go:89] found id: ""
	I1208 00:41:06.108831  896760 logs.go:282] 0 containers: []
	W1208 00:41:06.108848  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:06.108856  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:06.108867  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:06.124009  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:06.124027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:06.189487  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:06.180763   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.181346   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183067   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183548   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.185619   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:06.180763   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.181346   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183067   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.183548   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:06.185619   17002 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:41:06.189497  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:06.189509  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:06.253352  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:06.253372  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:06.285932  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:06.285948  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:08.842570  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:08.854529  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:41:08.854589  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:41:08.884339  896760 cri.go:89] found id: ""
	I1208 00:41:08.884355  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.884362  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:41:08.884367  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:41:08.884427  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:41:08.914891  896760 cri.go:89] found id: ""
	I1208 00:41:08.914905  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.914924  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:41:08.914929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:41:08.914998  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:41:08.941436  896760 cri.go:89] found id: ""
	I1208 00:41:08.941452  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.941459  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:41:08.941465  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:41:08.941535  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:41:08.966802  896760 cri.go:89] found id: ""
	I1208 00:41:08.966816  896760 logs.go:282] 0 containers: []
	W1208 00:41:08.966823  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:41:08.966829  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:41:08.966890  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:41:09.002946  896760 cri.go:89] found id: ""
	I1208 00:41:09.002962  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.002971  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:41:09.002977  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:41:09.003049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:41:09.031184  896760 cri.go:89] found id: ""
	I1208 00:41:09.031199  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.031207  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:41:09.031213  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:41:09.031288  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:41:09.055946  896760 cri.go:89] found id: ""
	I1208 00:41:09.055971  896760 logs.go:282] 0 containers: []
	W1208 00:41:09.055979  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:41:09.055987  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:41:09.055997  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:41:09.121830  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:41:09.121850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:41:09.150682  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:41:09.150700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:41:09.214609  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:41:09.214636  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:41:09.230018  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:41:09.230035  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:41:09.298095  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:41:09.289090   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.289949   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.291555   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.292097   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.293719   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:41:09.289090   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.289949   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.291555   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.292097   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:41:09.293719   17126 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
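On every failed probe, minikube re-gathers the same five evidence sources (kubelet, containerd, dmesg, container status, describe nodes). Consolidated into one pass, with each command copied from the log above (run inside the node):

	sudo journalctl -u kubelet -n 400        # kubelet service log
	sudo journalctl -u containerd -n 400     # container runtime log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # fails while :8441 is down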
	I1208 00:41:11.798922  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:11.809212  896760 kubeadm.go:602] duration metric: took 4m1.466236852s to restartPrimaryControlPlane
	W1208 00:41:11.809278  896760 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:41:11.810440  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:41:12.224260  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:41:12.238539  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:41:12.247058  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:41:12.247114  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:41:12.255525  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:41:12.255534  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:41:12.255586  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:41:12.263892  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:41:12.263953  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:41:12.271955  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:41:12.280091  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:41:12.280149  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:41:12.288143  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.296120  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:41:12.296196  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.303946  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:41:12.312368  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:41:12.312423  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
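The four grep/rm pairs above implement one rule: keep a kubeconfig under /etc/kubernetes only if it already points at the expected control-plane endpoint, otherwise remove it. A compact sketch of that cleanup, with the endpoint and file names taken from the log ('grep -q' added only to silence output):

	endpoint='https://control-plane.minikube.internal:8441'
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
	    sudo rm -f "/etc/kubernetes/${f}"    # absent or pointing elsewhere
	  fi
	done

Here none of the files existed (the kubeadm reset just above had removed them), so every grep exits non-zero and the rm calls are no-ops.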
	I1208 00:41:12.320463  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:41:12.364373  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:41:12.364695  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:41:12.438406  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:41:12.438492  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:41:12.438531  896760 kubeadm.go:319] OS: Linux
	I1208 00:41:12.438577  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:41:12.438625  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:41:12.438672  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:41:12.438719  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:41:12.438766  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:41:12.438813  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:41:12.438857  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:41:12.438904  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:41:12.438949  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:41:12.514836  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:41:12.514942  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:41:12.515034  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:41:12.521560  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:41:12.527008  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:41:12.527099  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:41:12.527164  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:41:12.527241  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:41:12.527300  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:41:12.527369  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:41:12.527423  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:41:12.527485  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:41:12.527544  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:41:12.527617  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:41:12.527688  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:41:12.527724  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:41:12.527778  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:41:13.245010  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:41:13.299392  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:41:13.614595  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:41:13.963710  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:41:14.175279  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:41:14.176043  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:41:14.180186  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:41:14.183629  896760 out.go:252]   - Booting up control plane ...
	I1208 00:41:14.183729  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:41:14.183806  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:41:14.184436  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:41:14.204887  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:41:14.204990  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:41:14.213421  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:41:14.213704  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:41:14.213908  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:41:14.352082  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:41:14.352289  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:45:14.352397  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00008019s
	I1208 00:45:14.352432  896760 kubeadm.go:319] 
	I1208 00:45:14.352488  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:45:14.352520  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:45:14.352633  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:45:14.352639  896760 kubeadm.go:319] 
	I1208 00:45:14.352742  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:45:14.352774  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:45:14.352803  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:45:14.352807  896760 kubeadm.go:319] 
	I1208 00:45:14.356965  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:45:14.357429  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:45:14.357540  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:45:14.357802  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 00:45:14.357807  896760 kubeadm.go:319] 
	I1208 00:45:14.357875  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:45:14.357995  896760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00008019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
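kubeadm only reports that the kubelet never answered its health probe within 4m0s; the actual cause lives in the kubelet's own log. Its suggested triage, plus the exact probe it was polling, all taken from the failure text above (run inside the node):

	systemctl status kubelet                    # is the unit active at all?
	journalctl -xeu kubelet | tail -n 100       # the crash loop's own error, if any
	curl -sSL http://127.0.0.1:10248/healthz    # a healthy kubelet answers "ok"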
	
	I1208 00:45:14.358087  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:45:14.770086  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:45:14.783732  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:45:14.783788  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:45:14.791646  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:45:14.791657  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:45:14.791710  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:45:14.799512  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:45:14.799569  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:45:14.807303  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:45:14.815223  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:45:14.815280  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:45:14.822916  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.831219  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:45:14.831274  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.838751  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:45:14.846479  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:45:14.846535  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:45:14.855105  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:45:14.892727  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:45:14.893019  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:45:14.958827  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:45:14.958888  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:45:14.958921  896760 kubeadm.go:319] OS: Linux
	I1208 00:45:14.958963  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:45:14.959008  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:45:14.959052  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:45:14.959097  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:45:14.959143  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:45:14.959192  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:45:14.959234  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:45:14.959279  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:45:14.959321  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:45:15.063986  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:45:15.064091  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:45:15.064182  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:45:15.072119  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:45:15.073836  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:45:15.073929  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:45:15.073997  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:45:15.074078  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:45:15.074847  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:45:15.074919  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:45:15.074970  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:45:15.075029  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:45:15.075086  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:45:15.075260  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:45:15.075466  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:45:15.075788  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:45:15.075847  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:45:15.207541  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:45:15.419182  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:45:15.708081  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:45:15.925468  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:45:16.152957  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:45:16.153669  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:45:16.156472  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:45:16.157817  896760 out.go:252]   - Booting up control plane ...
	I1208 00:45:16.157909  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:45:16.157987  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:45:16.159025  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:45:16.179954  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:45:16.180052  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:45:16.189229  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:45:16.190665  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:45:16.190709  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:45:16.336970  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:45:16.337083  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:49:16.337272  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305556s
	I1208 00:49:16.337296  896760 kubeadm.go:319] 
	I1208 00:49:16.337409  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:49:16.337518  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:49:16.337839  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:49:16.337849  896760 kubeadm.go:319] 
	I1208 00:49:16.338164  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:49:16.338221  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:49:16.338281  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:49:16.338285  896760 kubeadm.go:319] 
	I1208 00:49:16.344611  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:49:16.345152  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:49:16.345268  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:49:16.345632  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:49:16.345641  896760 kubeadm.go:319] 
	I1208 00:49:16.345722  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
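The second init attempt fails identically, and the only actionable preflight warning is the cgroups v1 deprecation: this 5.15 AWS kernel runs cgroup v1, which kubelet v1.35+ rejects unless explicitly re-enabled. A hedged sketch of that opt-in, assuming the KubeletConfiguration field spelling 'failCgroupV1' (lowerCamelCase of the 'FailCgroupV1' option the warning names) and the config path this log shows kubeadm writing; the kubelet never logged why it was unhealthy on this run, so this is one candidate fix, not a confirmed one:

	# Append the cgroup v1 opt-in to the kubelet config kubeadm wrote, then restart.
	sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet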
	I1208 00:49:16.345780  896760 kubeadm.go:403] duration metric: took 12m6.045651138s to StartCluster
	I1208 00:49:16.345820  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:49:16.345897  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:49:16.378062  896760 cri.go:89] found id: ""
	I1208 00:49:16.378080  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.378088  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:49:16.378094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:49:16.378167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:49:16.414010  896760 cri.go:89] found id: ""
	I1208 00:49:16.414024  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.414043  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:49:16.414048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:49:16.414116  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:49:16.441708  896760 cri.go:89] found id: ""
	I1208 00:49:16.441732  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.441739  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:49:16.441745  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:49:16.441816  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:49:16.469812  896760 cri.go:89] found id: ""
	I1208 00:49:16.469826  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.469833  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:49:16.469848  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:49:16.469906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:49:16.495155  896760 cri.go:89] found id: ""
	I1208 00:49:16.495170  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.495177  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:49:16.495183  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:49:16.495242  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:49:16.522141  896760 cri.go:89] found id: ""
	I1208 00:49:16.522155  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.522163  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:49:16.522168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:49:16.522227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:49:16.551643  896760 cri.go:89] found id: ""
	I1208 00:49:16.551656  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.551663  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:49:16.551671  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:49:16.551681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:49:16.614342  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:49:16.614362  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:49:16.644124  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:49:16.644140  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:49:16.703646  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:49:16.703665  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:49:16.718513  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:49:16.718530  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:49:16.782371  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 00:49:16.782383  896760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:49:16.782409  896760 out.go:285] * 
	W1208 00:49:16.782515  896760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.782535  896760 out.go:285] * 
	W1208 00:49:16.784660  896760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:49:16.789524  896760 out.go:203] 
	W1208 00:49:16.792367  896760 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.792413  896760 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:49:16.792436  896760 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:49:16.795713  896760 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919395045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919406500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919452219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919473840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919487707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919499637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919508720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919528249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919545578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919576815Z" level=info msg="Connect containerd service"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919974424Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.920657812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935258404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935352461Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935383608Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935425134Z" level=info msg="Start recovering state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981163284Z" level=info msg="Start event monitor"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981372805Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981441769Z" level=info msg="Start streaming server"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981512023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981572914Z" level=info msg="runtime interface starting up..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981643085Z" level=info msg="starting plugins..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981710277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981908794Z" level=info msg="containerd successfully booted in 0.086733s"
	Dec 08 00:37:08 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:51:27.852422   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:27.853282   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:27.854870   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:27.855381   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:27.857128   23161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:51:27 up  5:34,  0 user,  load average: 0.43, 0.26, 0.56
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:51:24 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:25 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 08 00:51:25 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:25 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:25 functional-386544 kubelet[22994]: E1208 00:51:25.400991   22994 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:25 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:25 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 08 00:51:26 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:26 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:26 functional-386544 kubelet[23025]: E1208 00:51:26.144038   23025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 08 00:51:26 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:26 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:26 functional-386544 kubelet[23072]: E1208 00:51:26.906629   23072 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:26 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:27 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 495.
	Dec 08 00:51:27 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:27 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:27 functional-386544 kubelet[23116]: E1208 00:51:27.665783   23116 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:27 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:27 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (376.69932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)
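
The kubelet journal above pins down why the apiserver never came up: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and this Ubuntu 20.04 / 5.15.0-1084-aws node is still on cgroup v1. A minimal check-and-workaround sketch, assuming shell access to the node and the profile name from this run; the failCgroupV1 field is the KubeletConfiguration option named in the kubeadm warning above, and the patch path is hypothetical:

	# Confirm the host cgroup version: cgroup2fs means v2, tmpfs means v1.
	stat -fc %T /sys/fs/cgroup

	# Hypothetical kubeadm patch opting kubelet back into cgroup v1, per the
	# warning above; it would have to reach kubeadm via its --patches mechanism
	# (the log shows minikube already applies a "kubeletconfiguration" patch).
	mkdir -p /tmp/kubelet-patches
	cat <<'EOF' > /tmp/kubelet-patches/kubeletconfiguration.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

	# Or retry with the flag minikube itself suggests in the failure output:
	out/minikube-linux-arm64 start -p functional-386544 --extra-config=kubelet.cgroup-driver=systemd

Whether failCgroupV1: false alone is enough here is untested; on this hardware the cleaner fix is likely running the job on a cgroup v2 host.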

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-386544 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-386544 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.521109ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-386544 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-386544 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-386544 describe po hello-node-connect: exit status 1 (63.225468ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-386544 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-386544 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-386544 logs -l app=hello-node-connect: exit status 1 (56.504661ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-386544 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-386544 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-386544 describe svc hello-node-connect: exit status 1 (58.710928ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-386544 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
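Every kubectl call in this cascade fails identically because nothing is listening on 8441; the node container itself is still up, which is why the status checks around this dump report the host as Running while the apiserver is Stopped. A quick way to see both states at once, assuming the profile from this run:

	# Host should report Running, apiserver Stopped (matching the checks below).
	out/minikube-linux-arm64 status -p functional-386544
	# Expected to fail with the same connection-refused error as above.
	kubectl --context functional-386544 cluster-info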
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
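The inspect output confirms the networking side is intact: 8441/tcp on the node (192.168.49.2) is published to 127.0.0.1:33561 on the host, so the refusals above reflect the apiserver's absence, not a port-mapping problem. A probe sketch, assuming the dynamically assigned host port from this particular run:

	# Resolve the host port Docker assigned to the apiserver port (33561 here).
	docker port functional-386544 8441

	# With no apiserver process, both paths should be refused, matching the errors above.
	curl -k --max-time 5 https://127.0.0.1:33561/healthz
	curl -k --max-time 5 https://192.168.49.2:8441/healthz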
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (330.78621ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-386544 cache reload                                                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:36 UTC │ 08 Dec 25 00:37 UTC │
	│ ssh     │ functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │ 08 Dec 25 00:37 UTC │
	│ kubectl │ functional-386544 kubectl -- --context functional-386544 get pods                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	│ start   │ -p functional-386544 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:37 UTC │                     │
	│ config  │ functional-386544 config unset cpus                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ cp      │ functional-386544 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ config  │ functional-386544 config get cpus                                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │                     │
	│ config  │ functional-386544 config set cpus 2                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ config  │ functional-386544 config get cpus                                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ config  │ functional-386544 config unset cpus                                                                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ ssh     │ functional-386544 ssh -n functional-386544 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ config  │ functional-386544 config get cpus                                                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │                     │
	│ ssh     │ functional-386544 ssh echo hello                                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ cp      │ functional-386544 cp functional-386544:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2533455381/001/cp-test.txt │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ ssh     │ functional-386544 ssh cat /etc/hostname                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ ssh     │ functional-386544 ssh -n functional-386544 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ tunnel  │ functional-386544 tunnel --alsologtostderr                                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │                     │
	│ tunnel  │ functional-386544 tunnel --alsologtostderr                                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │                     │
	│ cp      │ functional-386544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ tunnel  │ functional-386544 tunnel --alsologtostderr                                                                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │                     │
	│ ssh     │ functional-386544 ssh -n functional-386544 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:49 UTC │ 08 Dec 25 00:49 UTC │
	│ addons  │ functional-386544 addons list                                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ addons  │ functional-386544 addons list -o json                                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:37:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:37:06.019721  896760 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:37:06.019851  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.019855  896760 out.go:374] Setting ErrFile to fd 2...
	I1208 00:37:06.019858  896760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:37:06.020163  896760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:37:06.020664  896760 out.go:368] Setting JSON to false
	I1208 00:37:06.021613  896760 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":19179,"bootTime":1765135047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:37:06.021695  896760 start.go:143] virtualization:  
	I1208 00:37:06.025173  896760 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:37:06.029087  896760 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:37:06.029181  896760 notify.go:221] Checking for updates...
	I1208 00:37:06.035043  896760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:37:06.037984  896760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:37:06.041170  896760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:37:06.044080  896760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:37:06.047053  896760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:37:06.050554  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:06.050663  896760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:37:06.082313  896760 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:37:06.082426  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.147928  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.138471154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.148036  896760 docker.go:319] overlay module found
	I1208 00:37:06.151011  896760 out.go:179] * Using the docker driver based on existing profile
	I1208 00:37:06.153817  896760 start.go:309] selected driver: docker
	I1208 00:37:06.153826  896760 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.153925  896760 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:37:06.154035  896760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:37:06.211588  896760 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-08 00:37:06.202265066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:37:06.212013  896760 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 00:37:06.212038  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:06.212099  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:06.212152  896760 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:06.217244  896760 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
	I1208 00:37:06.220210  896760 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:37:06.223461  896760 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:37:06.226522  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:06.226581  896760 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:37:06.226589  896760 cache.go:65] Caching tarball of preloaded images
	I1208 00:37:06.226692  896760 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 00:37:06.226679  896760 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:37:06.226706  896760 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 00:37:06.226817  896760 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
	I1208 00:37:06.250884  896760 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:37:06.250894  896760 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 00:37:06.250908  896760 cache.go:243] Successfully downloaded all kic artifacts
	I1208 00:37:06.250945  896760 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 00:37:06.250999  896760 start.go:364] duration metric: took 38.401µs to acquireMachinesLock for "functional-386544"
	I1208 00:37:06.251017  896760 start.go:96] Skipping create...Using existing machine configuration
	I1208 00:37:06.251022  896760 fix.go:54] fixHost starting: 
	I1208 00:37:06.251283  896760 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
	I1208 00:37:06.268900  896760 fix.go:112] recreateIfNeeded on functional-386544: state=Running err=<nil>
	W1208 00:37:06.268920  896760 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 00:37:06.272102  896760 out.go:252] * Updating the running docker "functional-386544" container ...
	I1208 00:37:06.272127  896760 machine.go:94] provisionDockerMachine start ...
	I1208 00:37:06.272215  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.289500  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.289831  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.289837  896760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 00:37:06.446749  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.446764  896760 ubuntu.go:182] provisioning hostname "functional-386544"
	I1208 00:37:06.446826  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.466658  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.466960  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.466968  896760 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
	I1208 00:37:06.637199  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
	
	I1208 00:37:06.637280  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:06.656923  896760 main.go:143] libmachine: Using SSH client type: native
	I1208 00:37:06.657245  896760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I1208 00:37:06.657259  896760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts; 
				fi
			fi
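	The inline script above is the remote command minikube runs to pin the node name in /etc/hosts: it rewrites an existing 127.0.1.1 entry in place, or appends one if none is present. A quick manual check that the entry took (a sketch; profile name copied from this run):
		grep -x '127.0.1.1 functional-386544' /etc/hosts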
	I1208 00:37:06.810893  896760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 00:37:06.810908  896760 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 00:37:06.810925  896760 ubuntu.go:190] setting up certificates
	I1208 00:37:06.810935  896760 provision.go:84] configureAuth start
	I1208 00:37:06.811016  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:06.829686  896760 provision.go:143] copyHostCerts
	I1208 00:37:06.829765  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 00:37:06.829784  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 00:37:06.829861  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 00:37:06.829960  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 00:37:06.829964  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 00:37:06.829992  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 00:37:06.830039  896760 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 00:37:06.830042  896760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 00:37:06.830063  896760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 00:37:06.830106  896760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
	I1208 00:37:07.178648  896760 provision.go:177] copyRemoteCerts
	I1208 00:37:07.178704  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 00:37:07.178748  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.196483  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.308383  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 00:37:07.329033  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 00:37:07.348621  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 00:37:07.367658  896760 provision.go:87] duration metric: took 556.701814ms to configureAuth
	I1208 00:37:07.367675  896760 ubuntu.go:206] setting minikube options for container-runtime
	I1208 00:37:07.367867  896760 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:37:07.367872  896760 machine.go:97] duration metric: took 1.095740792s to provisionDockerMachine
	I1208 00:37:07.367878  896760 start.go:293] postStartSetup for "functional-386544" (driver="docker")
	I1208 00:37:07.367889  896760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 00:37:07.367938  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 00:37:07.367977  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.392993  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.498867  896760 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 00:37:07.502617  896760 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 00:37:07.502635  896760 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 00:37:07.502647  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 00:37:07.502710  896760 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 00:37:07.502786  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 00:37:07.502867  896760 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
	I1208 00:37:07.502912  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
	I1208 00:37:07.511139  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:07.530267  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
	I1208 00:37:07.549480  896760 start.go:296] duration metric: took 181.586948ms for postStartSetup
	I1208 00:37:07.549558  896760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:37:07.549616  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.567759  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.671740  896760 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 00:37:07.676721  896760 fix.go:56] duration metric: took 1.425689657s for fixHost
	I1208 00:37:07.676741  896760 start.go:83] releasing machines lock for "functional-386544", held for 1.425734498s
	I1208 00:37:07.676811  896760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
	I1208 00:37:07.694624  896760 ssh_runner.go:195] Run: cat /version.json
	I1208 00:37:07.694669  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.694717  896760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 00:37:07.694775  896760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
	I1208 00:37:07.720790  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.720932  896760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
	I1208 00:37:07.911560  896760 ssh_runner.go:195] Run: systemctl --version
	I1208 00:37:07.918241  896760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 00:37:07.922676  896760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 00:37:07.922750  896760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 00:37:07.930831  896760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 00:37:07.930844  896760 start.go:496] detecting cgroup driver to use...
	I1208 00:37:07.930875  896760 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 00:37:07.930921  896760 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 00:37:07.947115  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 00:37:07.961050  896760 docker.go:218] disabling cri-docker service (if available) ...
	I1208 00:37:07.961113  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 00:37:07.977365  896760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 00:37:07.991192  896760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 00:37:08.126175  896760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 00:37:08.269608  896760 docker.go:234] disabling docker service ...
	I1208 00:37:08.269664  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 00:37:08.284945  896760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 00:37:08.299108  896760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 00:37:08.432565  896760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 00:37:08.555248  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 00:37:08.569474  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 00:37:08.585412  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 00:37:08.595004  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 00:37:08.604840  896760 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 00:37:08.604902  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 00:37:08.613812  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.623203  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 00:37:08.633142  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 00:37:08.643038  896760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 00:37:08.652239  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 00:37:08.661623  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 00:37:08.671250  896760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
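	The sed calls from 00:37:08.585 through 00:37:08.671 are independent edits to /etc/containerd/config.toml; batched into a single invocation they look like the sketch below (patterns copied verbatim from the log; the unprivileged-ports insert is kept as a second command because it appends a line after the delete rather than substituting one):
		sudo sed -i -r \
		  -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' \
		  -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
		  -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
		  -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
		  -e '/systemd_cgroup/d' \
		  -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
		  -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
		  -e '/^ *enable_unprivileged_ports = .*/d' \
		  /etc/containerd/config.toml
		sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml
		sudo systemctl daemon-reload && sudo systemctl restart containerd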
	I1208 00:37:08.680657  896760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 00:37:08.688616  896760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 00:37:08.696764  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:08.823042  896760 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 00:37:08.984184  896760 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 00:37:08.984277  896760 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 00:37:08.989088  896760 start.go:564] Will wait 60s for crictl version
	I1208 00:37:08.989158  896760 ssh_runner.go:195] Run: which crictl
	I1208 00:37:08.993493  896760 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 00:37:09.024246  896760 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 00:37:09.024323  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.048155  896760 ssh_runner.go:195] Run: containerd --version
	I1208 00:37:09.074342  896760 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 00:37:09.077377  896760 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 00:37:09.094080  896760 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1208 00:37:09.101988  896760 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1208 00:37:09.104771  896760 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 00:37:09.104921  896760 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 00:37:09.104997  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.131121  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.131133  896760 containerd.go:534] Images already preloaded, skipping extraction
	I1208 00:37:09.131193  896760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 00:37:09.156235  896760 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 00:37:09.156250  896760 cache_images.go:86] Images are preloaded, skipping loading
	I1208 00:37:09.156277  896760 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1208 00:37:09.156381  896760 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
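	The double ExecStart= in the [Service] stanza above is deliberate systemd drop-in syntax: the empty assignment clears the packaged unit's command before the minikube-specific one is set. The drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; to inspect the merged result on the node (a sketch):
		systemctl cat kubelet
		systemctl show kubelet -p ExecStart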
	I1208 00:37:09.156452  896760 ssh_runner.go:195] Run: sudo crictl info
	I1208 00:37:09.182781  896760 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1208 00:37:09.182799  896760 cni.go:84] Creating CNI manager for ""
	I1208 00:37:09.182812  896760 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:37:09.182826  896760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 00:37:09.182847  896760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 00:37:09.182951  896760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-386544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 00:37:09.183025  896760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 00:37:09.190958  896760 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 00:37:09.191018  896760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 00:37:09.198701  896760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 00:37:09.211735  896760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 00:37:09.225024  896760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
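	With kubeadm.yaml.new staged, the rendered config could also be sanity-checked with kubeadm itself before the init phases further below; a sketch, assuming the bundled binary (recent kubeadm releases provide a `config validate` subcommand):
		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new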
	I1208 00:37:09.237969  896760 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1208 00:37:09.241818  896760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 00:37:09.362221  896760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 00:37:09.592794  896760 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
	I1208 00:37:09.592805  896760 certs.go:195] generating shared ca certs ...
	I1208 00:37:09.592820  896760 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:37:09.592963  896760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 00:37:09.593013  896760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 00:37:09.593019  896760 certs.go:257] generating profile certs ...
	I1208 00:37:09.593102  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
	I1208 00:37:09.593154  896760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
	I1208 00:37:09.593193  896760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
	I1208 00:37:09.593299  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 00:37:09.593334  896760 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 00:37:09.593340  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 00:37:09.593370  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 00:37:09.593392  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 00:37:09.593414  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 00:37:09.593455  896760 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 00:37:09.594053  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 00:37:09.614864  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 00:37:09.633613  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 00:37:09.652858  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 00:37:09.672208  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 00:37:09.691703  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 00:37:09.711394  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 00:37:09.730947  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 00:37:09.750211  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 00:37:09.769149  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 00:37:09.787710  896760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 00:37:09.806312  896760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 00:37:09.821128  896760 ssh_runner.go:195] Run: openssl version
	I1208 00:37:09.827672  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.835407  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 00:37:09.843631  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847882  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.847954  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 00:37:09.890017  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 00:37:09.897920  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.905917  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 00:37:09.913958  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918017  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.918088  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 00:37:09.960169  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 00:37:09.968154  896760 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.975996  896760 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 00:37:09.984080  896760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988210  896760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 00:37:09.988283  896760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 00:37:10.030981  896760 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
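	The hash-named links verified above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: each name is the output of `openssl x509 -hash` for the certificate plus a .0 suffix, which is how the system trust store looks up CAs. Rebuilding one by hand (a sketch, using the minikubeCA path from this run):
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"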
	I1208 00:37:10.040434  896760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 00:37:10.045482  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 00:37:10.089037  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 00:37:10.131753  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 00:37:10.174120  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 00:37:10.216988  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 00:37:10.258490  896760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 00:37:10.300139  896760 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:37:10.300218  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 00:37:10.300290  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.333072  896760 cri.go:89] found id: ""
	I1208 00:37:10.333133  896760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 00:37:10.342949  896760 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 00:37:10.342966  896760 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 00:37:10.343020  896760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 00:37:10.351917  896760 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.352512  896760 kubeconfig.go:125] found "functional-386544" server: "https://192.168.49.2:8441"
	I1208 00:37:10.356488  896760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 00:37:10.371422  896760 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 00:22:35.509962182 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 00:37:09.232874988 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1208 00:37:10.371434  896760 kubeadm.go:1161] stopping kube-system containers ...
	I1208 00:37:10.371448  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 00:37:10.371510  896760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 00:37:10.399030  896760 cri.go:89] found id: ""
	I1208 00:37:10.399096  896760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 00:37:10.416716  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:37:10.425417  896760 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec  8 00:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  8 00:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  8 00:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  8 00:26 /etc/kubernetes/scheduler.conf
	
	I1208 00:37:10.425491  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:37:10.433870  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:37:10.441918  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.441981  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:37:10.450104  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.458339  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.458406  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:37:10.466222  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:37:10.474083  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 00:37:10.474143  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:37:10.482138  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:37:10.490230  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:10.544026  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.386589  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.605461  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 00:37:11.662330  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
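With the stale files cleared, the log drives kubeadm through discrete init phases rather than one monolithic `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. Below is a hedged sketch of such a phase loop; the binary and config paths are copied from the log, but the loop itself is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths as they appear in the log; the loop is an illustration.
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Phases in the exact order the log runs them.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}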
	I1208 00:37:11.710396  896760 api_server.go:52] waiting for apiserver process to appear ...
	I1208 00:37:11.710500  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:37:12.210751  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 117 further `sudo pgrep -xnf kube-apiserver.*minikube.*` polls elided: the same check repeats every ~500ms from 00:37:12.710 through 00:38:10.710 without finding the process ...]
	I1208 00:38:11.210954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
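The pgrep lines above (condensed here) are a fixed-interval wait loop: the same process check re-runs roughly every 500ms until the apiserver appears or a deadline passes. A minimal sketch of that pattern, assuming a local pgrep in place of the SSH runner used in the log; the function name and timeout are invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep at the given interval until a process matching
// pattern exists (pgrep exits 0) or the timeout expires. The name, interval,
// and timeout here are illustrative; in the log the command runs over SSH.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q appeared within %v", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
	fmt.Println(err)
}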
	I1208 00:38:11.710606  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:11.710708  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:11.736420  896760 cri.go:89] found id: ""
	I1208 00:38:11.736434  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.736442  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:11.736447  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:11.736514  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:11.760217  896760 cri.go:89] found id: ""
	I1208 00:38:11.760231  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.760238  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:11.760243  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:11.760300  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:11.784869  896760 cri.go:89] found id: ""
	I1208 00:38:11.784882  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.784895  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:11.784900  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:11.784963  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:11.809329  896760 cri.go:89] found id: ""
	I1208 00:38:11.809345  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.809352  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:11.809357  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:11.809412  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:11.836937  896760 cri.go:89] found id: ""
	I1208 00:38:11.836951  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.836958  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:11.836964  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:11.837022  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:11.861979  896760 cri.go:89] found id: ""
	I1208 00:38:11.861993  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.862000  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:11.862006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:11.862067  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:11.891173  896760 cri.go:89] found id: ""
	I1208 00:38:11.891187  896760 logs.go:282] 0 containers: []
	W1208 00:38:11.891194  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:11.891202  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:11.891213  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:11.958401  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:11.947972   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.948491   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952189   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.952934   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:11.954407   10814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:11.958411  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:11.958422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:12.022654  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:12.022674  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:12.054077  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:12.054093  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:12.115415  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:12.115439  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
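Each diagnostic cycle asks crictl for containers by name, including stopped ones, and treats empty output as "not found". A small sketch of that lookup, under the assumption that crictl is run locally rather than over SSH as in the log; the helper name is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs crictl prints for containers whose name
// matches name, including stopped ones (-a). Empty output means none exist.
// Hypothetical helper; the log issues the same command through an SSH runner.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: error: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: found %d container(s) %v\n", name, len(ids), ids)
	}
}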
	[... five further diagnostic cycles elided (00:38:14, 00:38:17, 00:38:20, 00:38:23, 00:38:26), each essentially identical to the cycle above apart from timestamps, PIDs, and the order of the log-gathering steps: pgrep finds no apiserver process, crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and `kubectl describe nodes` fails with "connection refused" against localhost:8441 ...]
	I1208 00:38:29.367761  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:29.378770  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:29.378841  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:29.407904  896760 cri.go:89] found id: ""
	I1208 00:38:29.407918  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.407925  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:29.407937  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:29.407996  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:29.439249  896760 cri.go:89] found id: ""
	I1208 00:38:29.439263  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.439270  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:29.439275  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:29.439335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:29.464738  896760 cri.go:89] found id: ""
	I1208 00:38:29.464752  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.464760  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:29.464765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:29.464821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:29.491063  896760 cri.go:89] found id: ""
	I1208 00:38:29.491077  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.491085  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:29.491094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:29.491170  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:29.516981  896760 cri.go:89] found id: ""
	I1208 00:38:29.516995  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.517003  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:29.517008  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:29.517068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:29.542623  896760 cri.go:89] found id: ""
	I1208 00:38:29.542637  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.542644  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:29.542649  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:29.542706  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:29.568339  896760 cri.go:89] found id: ""
	I1208 00:38:29.568354  896760 logs.go:282] 0 containers: []
	W1208 00:38:29.568361  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:29.568368  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:29.568377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:29.628127  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:29.628145  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:29.643477  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:29.643493  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:29.719175  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:29.710217   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.710931   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.711810   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.713478   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:29.714040   11453 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:29.719187  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:29.719198  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:29.782292  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:29.782317  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
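The probe loop above (one pgrep for the apiserver process, then one crictl query per control-plane component) is taken verbatim from the Run: lines; as a standalone shell sketch it is:

	# Is an apiserver process for this profile running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Ask the CRI runtime for each expected control-plane container,
	# including exited ones (-a); an empty result means it never started.
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done

Every probe in this log returns an empty id, which is why each component is reported as not found.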
	[... the same collection cycle repeats every ~3 seconds from 00:38:32 through 00:38:47 (kubectl pids 11569, 11659, 11763, 11870, 11986, 12078) with identical results: pgrep and crictl find no control-plane containers, and "kubectl describe nodes" keeps failing with connection refused on localhost:8441 ...]
	I1208 00:38:49.930600  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:49.941210  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:49.941272  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:49.965928  896760 cri.go:89] found id: ""
	I1208 00:38:49.965942  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.965949  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:49.965954  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:49.966013  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:49.991571  896760 cri.go:89] found id: ""
	I1208 00:38:49.991585  896760 logs.go:282] 0 containers: []
	W1208 00:38:49.991592  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:49.991597  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:49.991661  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:50.031199  896760 cri.go:89] found id: ""
	I1208 00:38:50.031218  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.031226  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:50.031233  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:50.031308  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:50.058807  896760 cri.go:89] found id: ""
	I1208 00:38:50.058822  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.058830  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:50.058836  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:50.058898  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:50.089259  896760 cri.go:89] found id: ""
	I1208 00:38:50.089273  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.089281  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:50.089287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:50.089360  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:50.115363  896760 cri.go:89] found id: ""
	I1208 00:38:50.115377  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.115385  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:50.115391  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:50.115454  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:50.144975  896760 cri.go:89] found id: ""
	I1208 00:38:50.144990  896760 logs.go:282] 0 containers: []
	W1208 00:38:50.144998  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:50.145006  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:50.145020  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:50.160213  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:50.160230  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:50.226659  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:50.218140   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.218841   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220384   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.220991   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:50.222647   12182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:50.226669  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:50.226681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:50.288844  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:50.288865  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:50.321807  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:50.321824  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
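The gathering steps interleaved with the probes likewise reduce to a few shell commands, copied from the Run: lines above (the crictl/docker fallback is simplified here):

	sudo journalctl -u kubelet -n 400        # last 400 kubelet journal lines
	sudo journalctl -u containerd -n 400     # last 400 containerd journal lines
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a || sudo docker ps -a   # container status, CRI first

The describe-nodes step needs a reachable apiserver, so while port 8441 stays closed it can only reproduce the connection-refused stderr captured above.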
	I1208 00:38:52.878758  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:52.892078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:52.892141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:52.919955  896760 cri.go:89] found id: ""
	I1208 00:38:52.919969  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.919977  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:52.919982  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:52.920041  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:52.946242  896760 cri.go:89] found id: ""
	I1208 00:38:52.946256  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.946264  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:52.946269  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:52.946331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:52.976452  896760 cri.go:89] found id: ""
	I1208 00:38:52.976467  896760 logs.go:282] 0 containers: []
	W1208 00:38:52.976475  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:52.976480  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:52.976542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:53.005608  896760 cri.go:89] found id: ""
	I1208 00:38:53.005635  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.005644  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:53.005652  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:53.005729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:53.033758  896760 cri.go:89] found id: ""
	I1208 00:38:53.033773  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.033784  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:53.033789  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:53.033848  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:53.063554  896760 cri.go:89] found id: ""
	I1208 00:38:53.063568  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.063575  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:53.063581  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:53.063644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:53.093217  896760 cri.go:89] found id: ""
	I1208 00:38:53.093233  896760 logs.go:282] 0 containers: []
	W1208 00:38:53.093241  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:53.093249  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:53.093260  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:53.152571  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:53.152591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:53.167769  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:53.167785  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:53.232572  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:53.223864   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.224537   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226056   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.226508   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:53.228124   12287 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:53.232583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:53.232604  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:53.301625  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:53.301653  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
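
Each pass enumerates the control-plane components one crictl query at a time, using a --name filter per component; an empty ID list is what produces the 'No container was found matching' warnings above. The same enumeration, collapsed into a loop (component names taken verbatim from the log):

    # Sketch (bash): per-component container enumeration as the log performs it.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # all states, IDs only
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
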
	I1208 00:38:55.831231  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:55.843576  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:55.843680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:55.878177  896760 cri.go:89] found id: ""
	I1208 00:38:55.878191  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.878198  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:55.878203  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:55.878260  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:55.904640  896760 cri.go:89] found id: ""
	I1208 00:38:55.904660  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.904667  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:55.904672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:55.904729  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:55.930143  896760 cri.go:89] found id: ""
	I1208 00:38:55.930156  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.930163  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:55.930168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:55.930223  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:55.954696  896760 cri.go:89] found id: ""
	I1208 00:38:55.954710  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.954717  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:55.954723  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:55.954779  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:55.979424  896760 cri.go:89] found id: ""
	I1208 00:38:55.979438  896760 logs.go:282] 0 containers: []
	W1208 00:38:55.979445  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:55.979453  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:55.979513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:56.010864  896760 cri.go:89] found id: ""
	I1208 00:38:56.010879  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.010887  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:56.010893  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:56.010959  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:56.038141  896760 cri.go:89] found id: ""
	I1208 00:38:56.038155  896760 logs.go:282] 0 containers: []
	W1208 00:38:56.038163  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:56.038171  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:56.038183  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:56.105328  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:56.097052   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.097715   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099291   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.099797   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:56.101323   12386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:56.105339  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:56.105350  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:38:56.167859  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:56.167878  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:56.195618  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:56.195634  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:56.254386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:56.254406  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
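
The host-side log collection in each pass is three commands, taken verbatim from the Run: lines above and grouped here for reference: journalctl's -u selects the systemd unit and -n 400 limits output to the last 400 lines; dmesg's -P disables the pager, -H enables human-readable output, -L=never disables color, and --level keeps only warning-and-worse records.

    # The host-log collection commands from the passes above, grouped:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
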
	I1208 00:38:58.770585  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:38:58.780949  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:38:58.781010  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:38:58.804624  896760 cri.go:89] found id: ""
	I1208 00:38:58.804638  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.804645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:38:58.804651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:38:58.804710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:38:58.830257  896760 cri.go:89] found id: ""
	I1208 00:38:58.830271  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.830278  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:38:58.830283  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:38:58.830341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:38:58.870359  896760 cri.go:89] found id: ""
	I1208 00:38:58.870383  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.870390  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:38:58.870396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:38:58.870501  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:38:58.897347  896760 cri.go:89] found id: ""
	I1208 00:38:58.897361  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.897368  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:38:58.897373  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:38:58.897431  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:38:58.927474  896760 cri.go:89] found id: ""
	I1208 00:38:58.927488  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.927496  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:38:58.927501  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:38:58.927563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:38:58.953358  896760 cri.go:89] found id: ""
	I1208 00:38:58.953372  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.953380  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:38:58.953386  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:38:58.953443  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:38:58.978092  896760 cri.go:89] found id: ""
	I1208 00:38:58.978107  896760 logs.go:282] 0 containers: []
	W1208 00:38:58.978116  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:38:58.978124  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:38:58.978134  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:38:59.008505  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:38:59.008524  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:38:59.067065  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:38:59.067095  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:38:59.081827  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:38:59.081843  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:38:59.148151  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:38:59.137399   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.138082   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.141464   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.142167   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:38:59.143901   12509 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:38:59.148161  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:38:59.148172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
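
Every 'describe nodes' attempt fails the same way: kubectl dials the apiserver at localhost:8441 (the custom apiserver port for this profile) and gets connection refused, meaning nothing is listening on that port at all. A quick manual check of that hypothesis from inside the node (e.g. via minikube ssh -p functional-386544) might look like the sketch below; /livez is the apiserver's standard liveness endpoint.

    # Sketch (bash): confirm there is no listener on the apiserver port.
    sudo ss -ltn 'sport = :8441'            # expect no output if nothing listens
    curl -sk https://localhost:8441/livez \
      || echo "nothing listening on 8441"   # matches the 'connection refused' above
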
	I1208 00:39:01.713848  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:01.724264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:01.724326  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:01.752237  896760 cri.go:89] found id: ""
	I1208 00:39:01.752251  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.752258  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:01.752264  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:01.752325  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:01.778116  896760 cri.go:89] found id: ""
	I1208 00:39:01.778129  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.778136  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:01.778141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:01.778213  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:01.807711  896760 cri.go:89] found id: ""
	I1208 00:39:01.807725  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.807731  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:01.807737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:01.807798  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:01.836797  896760 cri.go:89] found id: ""
	I1208 00:39:01.836812  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.836820  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:01.836826  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:01.836884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:01.863221  896760 cri.go:89] found id: ""
	I1208 00:39:01.863235  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.863242  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:01.863247  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:01.863307  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:01.902460  896760 cri.go:89] found id: ""
	I1208 00:39:01.902476  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.902483  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:01.902489  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:01.902558  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:01.930861  896760 cri.go:89] found id: ""
	I1208 00:39:01.930874  896760 logs.go:282] 0 containers: []
	W1208 00:39:01.930882  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:01.930889  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:01.930900  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:01.987172  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:01.987190  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:02.006975  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:02.006993  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:02.075975  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:02.066621   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.067482   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069163   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.069825   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:02.071608   12607 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:02.076005  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:02.076017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:02.142423  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:02.142453  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:04.675643  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:04.688662  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:04.688743  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:04.716050  896760 cri.go:89] found id: ""
	I1208 00:39:04.716065  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.716072  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:04.716078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:04.716141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:04.742668  896760 cri.go:89] found id: ""
	I1208 00:39:04.742682  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.742690  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:04.742695  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:04.742756  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:04.769375  896760 cri.go:89] found id: ""
	I1208 00:39:04.769388  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.769396  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:04.769401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:04.769459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:04.795270  896760 cri.go:89] found id: ""
	I1208 00:39:04.795284  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.795291  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:04.795297  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:04.795354  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:04.822245  896760 cri.go:89] found id: ""
	I1208 00:39:04.822258  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.822265  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:04.822271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:04.822330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:04.859401  896760 cri.go:89] found id: ""
	I1208 00:39:04.859414  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.859422  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:04.859428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:04.859486  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:04.896707  896760 cri.go:89] found id: ""
	I1208 00:39:04.896721  896760 logs.go:282] 0 containers: []
	W1208 00:39:04.896728  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:04.896736  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:04.896745  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:04.967586  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:04.967603  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:04.983057  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:04.983080  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:05.060799  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:05.051458   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.052360   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054092   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.054784   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:05.056564   12716 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:05.060821  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:05.060832  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:05.123856  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:05.123875  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:07.653529  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:07.664109  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:07.664168  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:07.689362  896760 cri.go:89] found id: ""
	I1208 00:39:07.689376  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.689383  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:07.689388  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:07.689448  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:07.714707  896760 cri.go:89] found id: ""
	I1208 00:39:07.714722  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.714729  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:07.714734  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:07.714792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:07.740750  896760 cri.go:89] found id: ""
	I1208 00:39:07.740765  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.740771  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:07.740777  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:07.740834  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:07.765622  896760 cri.go:89] found id: ""
	I1208 00:39:07.765637  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.765645  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:07.765650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:07.765714  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:07.790729  896760 cri.go:89] found id: ""
	I1208 00:39:07.790744  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.790751  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:07.790756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:07.790824  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:07.821100  896760 cri.go:89] found id: ""
	I1208 00:39:07.821114  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.821122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:07.821127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:07.821185  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:07.855011  896760 cri.go:89] found id: ""
	I1208 00:39:07.855025  896760 logs.go:282] 0 containers: []
	W1208 00:39:07.855042  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:07.855050  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:07.855061  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:07.916163  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:07.916184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:07.931656  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:07.931672  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:08.007997  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:07.997166   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.997803   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999372   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:07.999735   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:08.001309   12823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:08.008026  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:08.008039  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:08.079922  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:08.079944  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
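
The 'container status' step uses a small fallback chain: resolve crictl via which (falling back to the bare name if it is not on PATH), run it with sudo, and only if that whole command fails run docker ps -a instead. Expanded for readability, with the behavior of the Run: line preserved:

    # Equivalent form of the container-status fallback from the log:
    crictl_bin=$(which crictl || echo crictl)      # resolved path, or bare name
    sudo "$crictl_bin" ps -a || sudo docker ps -a  # docker only as a last resort
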
	I1208 00:39:10.614429  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:10.625953  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:10.626015  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:10.651687  896760 cri.go:89] found id: ""
	I1208 00:39:10.651701  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.651708  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:10.651714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:10.651774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:10.676412  896760 cri.go:89] found id: ""
	I1208 00:39:10.676426  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.676433  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:10.676439  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:10.676507  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:10.705971  896760 cri.go:89] found id: ""
	I1208 00:39:10.705986  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.705992  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:10.705998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:10.706058  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:10.730598  896760 cri.go:89] found id: ""
	I1208 00:39:10.730621  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.730629  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:10.730634  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:10.730695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:10.756672  896760 cri.go:89] found id: ""
	I1208 00:39:10.756694  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.756702  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:10.756707  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:10.756770  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:10.786647  896760 cri.go:89] found id: ""
	I1208 00:39:10.786671  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.786679  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:10.786685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:10.786753  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:10.813024  896760 cri.go:89] found id: ""
	I1208 00:39:10.813037  896760 logs.go:282] 0 containers: []
	W1208 00:39:10.813045  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:10.813063  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:10.813074  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:10.870687  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:10.870718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:10.887434  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:10.887451  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:10.955043  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:10.946414   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.947120   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.948862   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.949480   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:10.951168   12930 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:10.955053  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:10.955064  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:11.016735  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:11.016756  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:13.547727  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:13.558158  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:13.558216  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:13.583025  896760 cri.go:89] found id: ""
	I1208 00:39:13.583045  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.583053  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:13.583058  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:13.583119  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:13.608731  896760 cri.go:89] found id: ""
	I1208 00:39:13.608744  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.608751  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:13.608756  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:13.608815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:13.634817  896760 cri.go:89] found id: ""
	I1208 00:39:13.634831  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.634838  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:13.634843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:13.634905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:13.659255  896760 cri.go:89] found id: ""
	I1208 00:39:13.659269  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.659276  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:13.659281  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:13.659341  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:13.683853  896760 cri.go:89] found id: ""
	I1208 00:39:13.683867  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.683882  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:13.683888  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:13.683949  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:13.708780  896760 cri.go:89] found id: ""
	I1208 00:39:13.708795  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.708802  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:13.708807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:13.708864  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:13.734678  896760 cri.go:89] found id: ""
	I1208 00:39:13.734692  896760 logs.go:282] 0 containers: []
	W1208 00:39:13.734699  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:13.734708  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:13.734718  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:13.790576  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:13.790597  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:13.805551  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:13.805567  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:13.884689  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:13.874759   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.875563   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.877382   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.878022   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:13.879730   13027 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:13.884710  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:13.884721  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:13.954356  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:13.954379  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
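
The timestamps show the overall shape of the wait: a full gather-and-probe pass roughly every three seconds, repeating until an apiserver process appears or the caller gives up. A sketch of that outer loop follows; the deadline value is illustrative only, not a figure taken from the log.

    # Sketch (bash): the retry cadence visible in the timestamps above.
    deadline=$((SECONDS + 300))              # illustrative budget only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for kube-apiserver"; break; }
      sleep 3
    done
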
	I1208 00:39:16.485706  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:16.496517  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:16.496577  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:16.526348  896760 cri.go:89] found id: ""
	I1208 00:39:16.526363  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.526370  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:16.526376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:16.526459  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:16.551935  896760 cri.go:89] found id: ""
	I1208 00:39:16.551949  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.551962  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:16.551968  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:16.552028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:16.576320  896760 cri.go:89] found id: ""
	I1208 00:39:16.576333  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.576340  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:16.576345  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:16.576403  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:16.605756  896760 cri.go:89] found id: ""
	I1208 00:39:16.605770  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.605777  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:16.605783  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:16.605839  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:16.632121  896760 cri.go:89] found id: ""
	I1208 00:39:16.632134  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.632141  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:16.632146  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:16.632203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:16.660423  896760 cri.go:89] found id: ""
	I1208 00:39:16.660437  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.660444  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:16.660450  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:16.660531  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:16.685576  896760 cri.go:89] found id: ""
	I1208 00:39:16.685595  896760 logs.go:282] 0 containers: []
	W1208 00:39:16.685602  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:16.685610  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:16.685620  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:16.740694  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:16.740712  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:16.755790  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:16.755806  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:16.821132  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:16.812998   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.813793   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.815524   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.816081   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:16.817224   13133 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:16.821152  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:16.821164  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:16.887057  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:16.887076  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:19.418598  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:19.428681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:19.428748  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:19.455940  896760 cri.go:89] found id: ""
	I1208 00:39:19.455953  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.455961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:19.455966  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:19.456027  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:19.482046  896760 cri.go:89] found id: ""
	I1208 00:39:19.482060  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.482067  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:19.482073  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:19.482130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:19.510706  896760 cri.go:89] found id: ""
	I1208 00:39:19.510720  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.510728  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:19.510733  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:19.510792  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:19.535505  896760 cri.go:89] found id: ""
	I1208 00:39:19.535520  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.535528  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:19.535533  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:19.535601  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:19.560234  896760 cri.go:89] found id: ""
	I1208 00:39:19.560248  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.560255  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:19.560261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:19.560328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:19.584606  896760 cri.go:89] found id: ""
	I1208 00:39:19.584621  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.584629  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:19.584637  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:19.584695  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:19.613195  896760 cri.go:89] found id: ""
	I1208 00:39:19.613226  896760 logs.go:282] 0 containers: []
	W1208 00:39:19.613234  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:19.613242  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:19.613252  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:19.670165  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:19.670184  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:19.685327  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:19.685351  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:19.749894  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:19.740851   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.741291   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743168   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.743802   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:19.745682   13240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:19.749914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:19.749928  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:19.812758  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:19.812779  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
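
When every container lookup comes back empty, each cycle falls back to collecting unit logs: kubelet and containerd from journald, plus a filtered dmesg, all invoked through bash -c over SSH. A local sketch of that collection step follows; it runs the same shell pipelines shown in the log but is an illustration, not the ssh_runner implementation, and assumes a systemd host with sudo available.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one of the shell pipelines from the log via bash -c,
    // mirroring how the commands are dispatched to the node.
    func gather(name, cmd string) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("=== %s (err: %v) ===\n%s\n", name, err, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
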
	I1208 00:39:22.352520  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:22.362719  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:22.362790  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:22.387649  896760 cri.go:89] found id: ""
	I1208 00:39:22.387662  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.387669  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:22.387675  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:22.387734  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:22.416444  896760 cri.go:89] found id: ""
	I1208 00:39:22.416458  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.416465  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:22.416470  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:22.416538  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:22.442291  896760 cri.go:89] found id: ""
	I1208 00:39:22.442305  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.442312  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:22.442317  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:22.442377  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:22.466919  896760 cri.go:89] found id: ""
	I1208 00:39:22.466933  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.466940  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:22.466945  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:22.467011  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:22.492435  896760 cri.go:89] found id: ""
	I1208 00:39:22.492449  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.492456  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:22.492461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:22.492526  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:22.518157  896760 cri.go:89] found id: ""
	I1208 00:39:22.518183  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.518190  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:22.518197  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:22.518266  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:22.544341  896760 cri.go:89] found id: ""
	I1208 00:39:22.544356  896760 logs.go:282] 0 containers: []
	W1208 00:39:22.544363  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:22.544371  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:22.544389  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:22.601655  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:22.601676  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:22.617670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:22.617700  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:22.686714  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:22.677722   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.678617   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680248   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.680721   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:22.682462   13348 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:22.686725  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:22.686736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:22.749600  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:22.749621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.281783  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:25.292163  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:25.292227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:25.316234  896760 cri.go:89] found id: ""
	I1208 00:39:25.316249  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.316257  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:25.316262  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:25.316330  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:25.350433  896760 cri.go:89] found id: ""
	I1208 00:39:25.350478  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.350485  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:25.350491  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:25.350562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:25.376982  896760 cri.go:89] found id: ""
	I1208 00:39:25.376996  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.377004  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:25.377009  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:25.377076  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:25.402484  896760 cri.go:89] found id: ""
	I1208 00:39:25.402499  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.402506  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:25.402511  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:25.402580  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:25.429596  896760 cri.go:89] found id: ""
	I1208 00:39:25.429611  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.429618  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:25.429624  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:25.429692  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:25.455037  896760 cri.go:89] found id: ""
	I1208 00:39:25.455051  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.455059  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:25.455064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:25.455130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:25.484391  896760 cri.go:89] found id: ""
	I1208 00:39:25.484404  896760 logs.go:282] 0 containers: []
	W1208 00:39:25.484412  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:25.484420  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:25.484430  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:25.512262  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:25.512282  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:25.569524  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:25.569543  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:25.584301  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:25.584316  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:25.650571  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:25.642290   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.642931   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.644604   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.645179   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:25.646810   13464 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:25.650583  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:25.650594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
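
The recurring "describe nodes" failure, dial tcp [::1]:8441: connect: connection refused, is a TCP-layer error: kubectl resolves localhost to [::1] and nothing is listening on the apiserver port at all, so the failure happens before TLS or authentication. The same reachability check, reduced to a raw dial, is sketched below; port 8441 is the apiserver port configured for this test run.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// A refused dial here corresponds exactly to the "connection refused"
    	// lines in the kubectl stderr blocks above.
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
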
	I1208 00:39:28.218586  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:28.229069  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:28.229127  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:28.254473  896760 cri.go:89] found id: ""
	I1208 00:39:28.254487  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.254494  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:28.254499  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:28.254563  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:28.283388  896760 cri.go:89] found id: ""
	I1208 00:39:28.283403  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.283410  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:28.283418  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:28.283475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:28.310968  896760 cri.go:89] found id: ""
	I1208 00:39:28.310983  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.310990  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:28.310995  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:28.311061  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:28.336049  896760 cri.go:89] found id: ""
	I1208 00:39:28.336064  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.336072  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:28.336078  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:28.336141  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:28.360451  896760 cri.go:89] found id: ""
	I1208 00:39:28.360464  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.360470  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:28.360475  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:28.360542  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:28.385117  896760 cri.go:89] found id: ""
	I1208 00:39:28.385131  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.385138  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:28.385143  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:28.385196  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:28.408915  896760 cri.go:89] found id: ""
	I1208 00:39:28.408928  896760 logs.go:282] 0 containers: []
	W1208 00:39:28.408935  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:28.408943  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:28.408953  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:28.423316  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:28.423332  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:28.486812  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:28.478402   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.479218   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.480768   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.481243   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:28.482870   13555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:28.486823  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:28.486833  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:28.553325  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:28.553344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:28.582011  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:28.582027  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.143204  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:31.154196  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:31.154264  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:31.181624  896760 cri.go:89] found id: ""
	I1208 00:39:31.181638  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.181645  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:31.181650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:31.181713  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:31.207658  896760 cri.go:89] found id: ""
	I1208 00:39:31.207672  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.207679  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:31.207684  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:31.207742  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:31.233323  896760 cri.go:89] found id: ""
	I1208 00:39:31.233338  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.233345  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:31.233351  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:31.233411  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:31.258320  896760 cri.go:89] found id: ""
	I1208 00:39:31.258335  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.258342  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:31.258347  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:31.258406  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:31.283846  896760 cri.go:89] found id: ""
	I1208 00:39:31.283860  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.283868  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:31.283873  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:31.283931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:31.310064  896760 cri.go:89] found id: ""
	I1208 00:39:31.310079  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.310086  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:31.310091  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:31.310149  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:31.337328  896760 cri.go:89] found id: ""
	I1208 00:39:31.337350  896760 logs.go:282] 0 containers: []
	W1208 00:39:31.337358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:31.337367  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:31.337377  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:31.392950  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:31.392969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:31.407922  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:31.407939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:31.474904  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:31.466634   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.467255   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.468771   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.469240   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:31.470878   13663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:31.474915  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:31.474925  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:31.536814  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:31.536834  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:34.069082  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:34.079471  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:34.079532  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:34.121832  896760 cri.go:89] found id: ""
	I1208 00:39:34.121846  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.121853  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:34.121859  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:34.121923  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:34.151527  896760 cri.go:89] found id: ""
	I1208 00:39:34.151541  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.151548  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:34.151553  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:34.151613  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:34.179098  896760 cri.go:89] found id: ""
	I1208 00:39:34.179113  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.179121  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:34.179126  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:34.179184  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:34.209529  896760 cri.go:89] found id: ""
	I1208 00:39:34.209548  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.209563  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:34.209568  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:34.209655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:34.238235  896760 cri.go:89] found id: ""
	I1208 00:39:34.238249  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.238256  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:34.238261  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:34.238318  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:34.263739  896760 cri.go:89] found id: ""
	I1208 00:39:34.263752  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.263760  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:34.263765  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:34.263838  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:34.293315  896760 cri.go:89] found id: ""
	I1208 00:39:34.293330  896760 logs.go:282] 0 containers: []
	W1208 00:39:34.293337  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:34.293345  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:34.293356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:34.348849  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:34.348873  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:34.363941  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:34.363958  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:34.430475  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:34.421874   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.422434   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424055   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.424602   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:34.426357   13770 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:34.430487  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:34.430501  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:34.492396  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:34.492415  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.025457  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:37.036130  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:37.036201  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:37.063581  896760 cri.go:89] found id: ""
	I1208 00:39:37.063595  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.063602  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:37.063609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:37.063672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:37.088301  896760 cri.go:89] found id: ""
	I1208 00:39:37.088320  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.088328  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:37.088334  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:37.088395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:37.124388  896760 cri.go:89] found id: ""
	I1208 00:39:37.124402  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.124409  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:37.124417  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:37.124474  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:37.154807  896760 cri.go:89] found id: ""
	I1208 00:39:37.154821  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.154838  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:37.154843  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:37.154912  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:37.180191  896760 cri.go:89] found id: ""
	I1208 00:39:37.180204  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.180212  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:37.180217  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:37.180279  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:37.205379  896760 cri.go:89] found id: ""
	I1208 00:39:37.205394  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.205402  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:37.205408  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:37.205487  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:37.233231  896760 cri.go:89] found id: ""
	I1208 00:39:37.233245  896760 logs.go:282] 0 containers: []
	W1208 00:39:37.233264  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:37.233271  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:37.233280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:37.297690  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:37.297709  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:37.325655  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:37.325682  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:37.385822  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:37.385841  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:37.400660  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:37.400685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:37.463113  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:37.454993   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.455632   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457344   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.457961   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:37.459076   13888 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
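
Each failed kubectl invocation above prints five near-identical memcache.go errors: the discovery client's request for the server's API group list, which kubectl appears to retry a few times before printing the final "connection to the server localhost:8441 was refused" summary. The same discovery request can be issued directly with client-go; this is a sketch that assumes client-go is available as a dependency, and the kubeconfig path is the one shown in the log.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the same kubeconfig the log passes to kubectl.
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// ServerGroups issues the same kind of GET /api discovery request
    	// that fails in the memcache.go lines above.
    	groups, err := dc.ServerGroups()
    	if err != nil {
    		fmt.Println("discovery failed:", err)
    		return
    	}
    	for _, g := range groups.Groups {
    		fmt.Println(g.Name)
    	}
    }
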
	I1208 00:39:39.963375  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:39.974152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:39.974214  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:40.011461  896760 cri.go:89] found id: ""
	I1208 00:39:40.011477  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.011485  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:40.011492  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:40.011588  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:40.050774  896760 cri.go:89] found id: ""
	I1208 00:39:40.050789  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.050810  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:40.050819  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:40.050895  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:40.078693  896760 cri.go:89] found id: ""
	I1208 00:39:40.078712  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.078737  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:40.078743  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:40.078832  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:40.119774  896760 cri.go:89] found id: ""
	I1208 00:39:40.119787  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.119806  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:40.119812  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:40.119870  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:40.150656  896760 cri.go:89] found id: ""
	I1208 00:39:40.150682  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.150689  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:40.150694  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:40.150761  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:40.182218  896760 cri.go:89] found id: ""
	I1208 00:39:40.182233  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.182247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:40.182253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:40.182329  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:40.212756  896760 cri.go:89] found id: ""
	I1208 00:39:40.212770  896760 logs.go:282] 0 containers: []
	W1208 00:39:40.212778  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:40.212786  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:40.212796  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:40.271111  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:40.271135  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:40.286128  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:40.286144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:40.350612  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:40.342184   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.342959   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344515   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.344978   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:40.346603   13977 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:40.350622  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:40.350633  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:40.413198  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:40.413217  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
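
When every component lookup comes back empty, the loop falls through to log gathering: the kubelet and containerd journals, a filtered dmesg, a describe-nodes call through the embedded kubeconfig, and a container status listing. The same evidence can be collected manually with the exact commands from the log (a sketch; run inside the node, e.g. over minikube ssh, with $() substituted for the log's backticks):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a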
	I1208 00:39:42.941473  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:42.951830  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:42.951896  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:42.977279  896760 cri.go:89] found id: ""
	I1208 00:39:42.977294  896760 logs.go:282] 0 containers: []
	W1208 00:39:42.977303  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:42.977309  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:42.977378  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:43.005862  896760 cri.go:89] found id: ""
	I1208 00:39:43.005878  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.005886  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:43.005891  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:43.006072  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:43.033594  896760 cri.go:89] found id: ""
	I1208 00:39:43.033609  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.033616  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:43.033621  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:43.033700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:43.058971  896760 cri.go:89] found id: ""
	I1208 00:39:43.058986  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.058993  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:43.058999  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:43.059056  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:43.084568  896760 cri.go:89] found id: ""
	I1208 00:39:43.084582  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.084590  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:43.084595  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:43.084657  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:43.121795  896760 cri.go:89] found id: ""
	I1208 00:39:43.121810  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.121818  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:43.121823  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:43.121884  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:43.151337  896760 cri.go:89] found id: ""
	I1208 00:39:43.151351  896760 logs.go:282] 0 containers: []
	W1208 00:39:43.151358  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:43.151365  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:43.151375  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:43.212011  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:43.212032  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:43.227510  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:43.227526  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:43.293650  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:43.284829   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.285287   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287188   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.287643   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:43.289445   14083 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:43.293672  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:43.293684  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:43.355405  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:43.355425  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
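
The recurring kubectl failure is a plain TCP refusal on [::1]:8441, meaning nothing is bound to the apiserver port at all rather than the apiserver answering with an error. Two quick checks narrow this down (a sketch, assuming ss and curl are available in the node image):

    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -sk --connect-timeout 5 https://localhost:8441/healthz; echo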
	I1208 00:39:45.883665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:45.894220  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:45.894287  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:45.919120  896760 cri.go:89] found id: ""
	I1208 00:39:45.919134  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.919141  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:45.919147  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:45.919202  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:45.944078  896760 cri.go:89] found id: ""
	I1208 00:39:45.944092  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.944100  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:45.944105  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:45.944171  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:45.969419  896760 cri.go:89] found id: ""
	I1208 00:39:45.969433  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.969440  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:45.969445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:45.969504  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:45.999721  896760 cri.go:89] found id: ""
	I1208 00:39:45.999736  896760 logs.go:282] 0 containers: []
	W1208 00:39:45.999744  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:45.999749  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:45.999807  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:46.027671  896760 cri.go:89] found id: ""
	I1208 00:39:46.027685  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.027697  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:46.027705  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:46.027763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:46.053035  896760 cri.go:89] found id: ""
	I1208 00:39:46.053050  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.053058  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:46.053064  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:46.053124  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:46.077745  896760 cri.go:89] found id: ""
	I1208 00:39:46.077759  896760 logs.go:282] 0 containers: []
	W1208 00:39:46.077767  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:46.077775  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:46.077786  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:46.137068  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:46.137086  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:46.153304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:46.153320  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:46.226313  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:46.213353   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.214165   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.217428   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.218324   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:46.219648   14187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:46.226334  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:46.226345  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:46.290116  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:46.290137  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
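
cri.go reports the runc state root /run/containerd/runc/k8s.io, i.e. crictl is querying containerd's k8s.io namespace. If there is any doubt about which runtime crictl is talking to, the endpoint can be pinned explicitly (a sketch; the socket path is containerd's default and an assumption about this image):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    sudo ls /run/containerd/runc/k8s.io 2>/dev/null || echo "no runc state: no pod containers were ever started"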
	I1208 00:39:48.819903  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:48.830265  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:48.830328  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:48.855396  896760 cri.go:89] found id: ""
	I1208 00:39:48.855411  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.855418  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:48.855423  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:48.855483  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:48.880269  896760 cri.go:89] found id: ""
	I1208 00:39:48.880282  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.880289  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:48.880294  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:48.880353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:48.904626  896760 cri.go:89] found id: ""
	I1208 00:39:48.904641  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.904648  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:48.904653  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:48.904715  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:48.930484  896760 cri.go:89] found id: ""
	I1208 00:39:48.930511  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.930519  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:48.930528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:48.930609  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:48.956159  896760 cri.go:89] found id: ""
	I1208 00:39:48.956173  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.956180  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:48.956185  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:48.956243  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:48.984643  896760 cri.go:89] found id: ""
	I1208 00:39:48.984657  896760 logs.go:282] 0 containers: []
	W1208 00:39:48.984664  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:48.984670  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:48.984737  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:49.012694  896760 cri.go:89] found id: ""
	I1208 00:39:49.012708  896760 logs.go:282] 0 containers: []
	W1208 00:39:49.012716  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:49.012724  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:49.012736  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:49.042898  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:49.042915  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:49.099079  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:49.099099  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:49.118877  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:49.118895  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:49.190253  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:49.181763   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.182699   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184438   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.184812   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:49.186288   14306 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:49.190263  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:49.190273  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:51.751406  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:51.761914  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:51.761973  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:51.788354  896760 cri.go:89] found id: ""
	I1208 00:39:51.788367  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.788375  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:51.788381  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:51.788441  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:51.816636  896760 cri.go:89] found id: ""
	I1208 00:39:51.816651  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.816658  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:51.816664  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:51.816735  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:51.842160  896760 cri.go:89] found id: ""
	I1208 00:39:51.842174  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.842181  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:51.842187  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:51.842249  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:51.867343  896760 cri.go:89] found id: ""
	I1208 00:39:51.867358  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.867365  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:51.867371  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:51.867432  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:51.891589  896760 cri.go:89] found id: ""
	I1208 00:39:51.891604  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.891611  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:51.891616  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:51.891681  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:51.915982  896760 cri.go:89] found id: ""
	I1208 00:39:51.915997  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.916016  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:51.916023  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:51.916081  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:51.940386  896760 cri.go:89] found id: ""
	I1208 00:39:51.940399  896760 logs.go:282] 0 containers: []
	W1208 00:39:51.940406  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:51.940414  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:51.940424  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:51.995386  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:51.995404  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:52.011670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:52.011689  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:52.085018  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:52.076277   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.076952   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.078626   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.079304   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:52.080944   14393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:52.085029  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:52.085041  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:52.155066  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:52.155085  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
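
Each describe-nodes attempt prints five memcache.go discovery errors before the final refusal: client-go retries the /api group-list request several times within a single kubectl invocation. Re-running the same command with client-side verbosity makes each round trip visible (a sketch; -v is a standard kubectl flag):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig -v=6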
	I1208 00:39:54.698041  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:54.708958  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:54.709024  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:54.734899  896760 cri.go:89] found id: ""
	I1208 00:39:54.734913  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.734921  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:54.734926  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:54.734985  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:54.761966  896760 cri.go:89] found id: ""
	I1208 00:39:54.761981  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.761988  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:54.761993  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:54.762052  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:54.787505  896760 cri.go:89] found id: ""
	I1208 00:39:54.787519  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.787526  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:54.787532  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:54.787595  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:54.813125  896760 cri.go:89] found id: ""
	I1208 00:39:54.813139  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.813147  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:54.813152  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:54.813212  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:54.840170  896760 cri.go:89] found id: ""
	I1208 00:39:54.840185  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.840193  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:54.840198  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:54.840269  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:54.865780  896760 cri.go:89] found id: ""
	I1208 00:39:54.865794  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.865801  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:54.865807  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:54.865867  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:54.890971  896760 cri.go:89] found id: ""
	I1208 00:39:54.890992  896760 logs.go:282] 0 containers: []
	W1208 00:39:54.891000  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:54.891007  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:54.891017  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:54.953695  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:54.953715  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:54.985753  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:54.985770  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:55.051156  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:55.051176  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:55.066530  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:55.066547  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:55.148075  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:55.138813   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.139739   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141533   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.141856   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:55.143406   14513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:39:57.649726  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:39:57.660051  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:39:57.660109  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:39:57.685992  896760 cri.go:89] found id: ""
	I1208 00:39:57.686008  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.686015  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:39:57.686022  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:39:57.686165  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:39:57.711195  896760 cri.go:89] found id: ""
	I1208 00:39:57.711209  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.711216  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:39:57.711224  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:39:57.711285  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:39:57.735850  896760 cri.go:89] found id: ""
	I1208 00:39:57.735864  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.735871  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:39:57.735877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:39:57.735936  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:39:57.761018  896760 cri.go:89] found id: ""
	I1208 00:39:57.761032  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.761040  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:39:57.761045  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:39:57.761110  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:39:57.787523  896760 cri.go:89] found id: ""
	I1208 00:39:57.787537  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.787544  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:39:57.787550  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:39:57.787607  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:39:57.813621  896760 cri.go:89] found id: ""
	I1208 00:39:57.813641  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.813648  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:39:57.813654  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:39:57.813717  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:39:57.837687  896760 cri.go:89] found id: ""
	I1208 00:39:57.837700  896760 logs.go:282] 0 containers: []
	W1208 00:39:57.837707  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:39:57.837715  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:39:57.837725  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:39:57.901756  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:39:57.901780  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:39:57.931916  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:39:57.931943  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:39:57.989769  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:39:57.989791  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:39:58.005304  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:39:58.005324  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:39:58.084868  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:39:58.076761   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.077366   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.078876   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.079370   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.080995   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:39:58.076761   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.077366   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.078876   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.079370   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:39:58.080995   14619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
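
The timestamps show the outer loop retrying on a roughly three-second cadence (00:39:39, :42, :45, :48, and so on). An illustrative equivalent, not minikube's actual code, is a simple poll-until loop:

    # poll for the apiserver process on the ~3s interval visible in the timestamps
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done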
	I1208 00:40:00.590352  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:00.608394  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:00.608458  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:00.646794  896760 cri.go:89] found id: ""
	I1208 00:40:00.646810  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.646818  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:00.646825  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:00.646893  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:00.722151  896760 cri.go:89] found id: ""
	I1208 00:40:00.722167  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.722175  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:00.722180  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:00.722252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:00.751689  896760 cri.go:89] found id: ""
	I1208 00:40:00.751705  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.751713  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:00.751720  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:00.751795  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:00.782551  896760 cri.go:89] found id: ""
	I1208 00:40:00.782577  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.782586  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:00.782593  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:00.782674  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:00.813259  896760 cri.go:89] found id: ""
	I1208 00:40:00.813275  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.813282  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:00.813287  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:00.813353  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:00.843171  896760 cri.go:89] found id: ""
	I1208 00:40:00.843193  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.843201  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:00.843206  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:00.843270  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:00.872240  896760 cri.go:89] found id: ""
	I1208 00:40:00.872266  896760 logs.go:282] 0 containers: []
	W1208 00:40:00.872275  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:00.872283  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:00.872297  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:00.933096  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:00.933116  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:00.949661  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:00.949685  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:01.022088  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:01.012633   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.013181   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015065   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015751   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.017482   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:01.012633   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.013181   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015065   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.015751   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:01.017482   14710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:01.022099  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:01.022112  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:01.087987  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:01.088007  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.623088  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:03.637929  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:03.637992  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:03.667258  896760 cri.go:89] found id: ""
	I1208 00:40:03.667272  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.667280  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:03.667286  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:03.667347  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:03.704022  896760 cri.go:89] found id: ""
	I1208 00:40:03.704035  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.704042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:03.704048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:03.704115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:03.733401  896760 cri.go:89] found id: ""
	I1208 00:40:03.733416  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.733423  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:03.733428  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:03.733489  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:03.760028  896760 cri.go:89] found id: ""
	I1208 00:40:03.760042  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.760049  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:03.760054  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:03.760113  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:03.784849  896760 cri.go:89] found id: ""
	I1208 00:40:03.784864  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.784871  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:03.784877  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:03.784934  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:03.809615  896760 cri.go:89] found id: ""
	I1208 00:40:03.809629  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.809636  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:03.809642  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:03.809700  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:03.834857  896760 cri.go:89] found id: ""
	I1208 00:40:03.834872  896760 logs.go:282] 0 containers: []
	W1208 00:40:03.834879  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:03.834886  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:03.834896  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:03.899301  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:03.891341   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.891827   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893391   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893830   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.895307   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:03.891341   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.891827   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893391   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.893830   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:03.895307   14809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
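Every `describe nodes` attempt fails the same way: kubectl dials the apiserver at localhost:8441 (the --apiserver-port chosen by this test) and gets "connection refused" because, per the crictl queries above, no kube-apiserver container exists to listen there. A quick probe of that condition, as a sketch (the address comes from the log; the helper name is made up):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // apiserverReachable reports whether anything is accepting TCP
    // connections on the address kubectl is dialing. A dial error here
    // corresponds to the "connect: connection refused" lines in the log.
    func apiserverReachable(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        addr := "localhost:8441" // --apiserver-port=8441 from this test run
        if !apiserverReachable(addr) {
            fmt.Printf("connection to %s refused; kube-apiserver is not up\n", addr)
            return
        }
        fmt.Printf("%s is accepting connections\n", addr)
    }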
	I1208 00:40:03.899312  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:03.899330  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:03.961403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:03.961422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:03.990248  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:03.990265  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:04.049257  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:04.049280  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
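The cycle then repeats as shown below: every ~3 seconds minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, re-lists containers, and re-gathers the same logs. A minimal sketch of such a poll-until-deadline loop, with illustrative interval and timeout values rather than minikube's actual ones:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverProcessRunning mirrors the check in the log: pgrep exits 0
    // only when a matching kube-apiserver process exists on the node.
    func apiserverProcessRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(5 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            if apiserverProcessRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            // In the log, this gap is where the kubelet/dmesg/containerd
            // logs are gathered before the next attempt.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }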
	I1208 00:40:06.564731  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:06.575277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:06.575339  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:06.603640  896760 cri.go:89] found id: ""
	I1208 00:40:06.603653  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.603662  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:06.603668  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:06.603727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:06.632743  896760 cri.go:89] found id: ""
	I1208 00:40:06.632757  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.632764  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:06.632769  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:06.632830  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:06.661586  896760 cri.go:89] found id: ""
	I1208 00:40:06.661600  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.661608  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:06.661613  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:06.661675  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:06.686811  896760 cri.go:89] found id: ""
	I1208 00:40:06.686833  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.686840  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:06.686845  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:06.686905  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:06.712624  896760 cri.go:89] found id: ""
	I1208 00:40:06.712639  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.712646  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:06.712651  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:06.712710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:06.737865  896760 cri.go:89] found id: ""
	I1208 00:40:06.737878  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.737898  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:06.737903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:06.737971  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:06.763555  896760 cri.go:89] found id: ""
	I1208 00:40:06.763569  896760 logs.go:282] 0 containers: []
	W1208 00:40:06.763576  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:06.763583  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:06.763594  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:06.820256  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:06.820275  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:06.835590  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:06.835606  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:06.900244  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:06.891950   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.892370   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.893980   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.894309   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.895881   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:06.891950   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.892370   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.893980   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.894309   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:06.895881   14921 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:06.900256  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:06.900269  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:06.964553  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:06.964573  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.497887  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:09.511442  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:09.511513  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:09.537551  896760 cri.go:89] found id: ""
	I1208 00:40:09.537566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.537573  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:09.537579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:09.537639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:09.564387  896760 cri.go:89] found id: ""
	I1208 00:40:09.564400  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.564408  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:09.564412  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:09.564471  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:09.592551  896760 cri.go:89] found id: ""
	I1208 00:40:09.592566  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.592573  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:09.592579  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:09.592638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:09.633536  896760 cri.go:89] found id: ""
	I1208 00:40:09.633553  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.633564  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:09.633572  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:09.633644  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:09.661685  896760 cri.go:89] found id: ""
	I1208 00:40:09.661700  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.661706  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:09.661711  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:09.661773  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:09.689367  896760 cri.go:89] found id: ""
	I1208 00:40:09.689382  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.689390  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:09.689396  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:09.689461  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:09.715001  896760 cri.go:89] found id: ""
	I1208 00:40:09.715025  896760 logs.go:282] 0 containers: []
	W1208 00:40:09.715033  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:09.715041  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:09.715052  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:09.743922  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:09.743939  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:09.801833  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:09.801852  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:09.817182  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:09.817199  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:09.885006  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:09.877198   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.877824   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.878892   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.879505   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.881097   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:09.877198   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.877824   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.878892   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.879505   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:09.881097   15037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:09.885017  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:09.885028  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.453176  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:12.463998  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:12.464059  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:12.489947  896760 cri.go:89] found id: ""
	I1208 00:40:12.489961  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.489968  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:12.489974  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:12.490053  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:12.517571  896760 cri.go:89] found id: ""
	I1208 00:40:12.517586  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.517594  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:12.517601  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:12.517680  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:12.549649  896760 cri.go:89] found id: ""
	I1208 00:40:12.549671  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.549679  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:12.549685  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:12.549764  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:12.575870  896760 cri.go:89] found id: ""
	I1208 00:40:12.575891  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.575899  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:12.575903  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:12.575975  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:12.615650  896760 cri.go:89] found id: ""
	I1208 00:40:12.615664  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.615672  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:12.615677  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:12.615745  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:12.644432  896760 cri.go:89] found id: ""
	I1208 00:40:12.644446  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.644454  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:12.644460  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:12.644536  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:12.671471  896760 cri.go:89] found id: ""
	I1208 00:40:12.671485  896760 logs.go:282] 0 containers: []
	W1208 00:40:12.671492  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:12.671499  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:12.671510  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:12.728175  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:12.728195  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:12.743959  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:12.743975  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:12.816570  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:12.807966   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.808787   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.810592   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.811075   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:12.812706   15129 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:12.816580  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:12.816591  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:12.879403  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:12.879423  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:15.414366  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:15.424841  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:15.424901  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:15.450991  896760 cri.go:89] found id: ""
	I1208 00:40:15.451005  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.451012  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:15.451017  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:15.451078  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:15.477340  896760 cri.go:89] found id: ""
	I1208 00:40:15.477354  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.477361  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:15.477366  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:15.477424  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:15.503035  896760 cri.go:89] found id: ""
	I1208 00:40:15.503048  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.503055  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:15.503060  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:15.503125  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:15.527771  896760 cri.go:89] found id: ""
	I1208 00:40:15.527787  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.527794  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:15.527798  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:15.527856  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:15.556600  896760 cri.go:89] found id: ""
	I1208 00:40:15.556627  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.556634  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:15.556639  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:15.556710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:15.582706  896760 cri.go:89] found id: ""
	I1208 00:40:15.582721  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.582728  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:15.582737  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:15.582821  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:15.628093  896760 cri.go:89] found id: ""
	I1208 00:40:15.628114  896760 logs.go:282] 0 containers: []
	W1208 00:40:15.628121  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:15.628129  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:15.628144  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:15.691996  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:15.692026  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:15.707812  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:15.707830  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:15.773396  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:15.764655   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.765411   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767117   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.767651   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:15.769241   15233 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:15.773407  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:15.773418  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:15.840937  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:15.840957  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:18.375079  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:18.385866  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:18.385931  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:18.412581  896760 cri.go:89] found id: ""
	I1208 00:40:18.412596  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.412603  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:18.412609  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:18.412672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:18.443837  896760 cri.go:89] found id: ""
	I1208 00:40:18.443863  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.443871  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:18.443876  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:18.443950  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:18.470522  896760 cri.go:89] found id: ""
	I1208 00:40:18.470549  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.470557  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:18.470565  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:18.470639  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:18.500112  896760 cri.go:89] found id: ""
	I1208 00:40:18.500127  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.500136  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:18.500141  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:18.500203  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:18.528643  896760 cri.go:89] found id: ""
	I1208 00:40:18.528657  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.528666  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:18.528672  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:18.528740  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:18.556708  896760 cri.go:89] found id: ""
	I1208 00:40:18.556722  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.556729  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:18.556735  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:18.556799  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:18.586255  896760 cri.go:89] found id: ""
	I1208 00:40:18.586270  896760 logs.go:282] 0 containers: []
	W1208 00:40:18.586277  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:18.586285  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:18.586295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:18.651954  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:18.651974  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:18.668271  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:18.668288  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:18.735458  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:18.726589   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.727229   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729016   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.729638   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:18.731394   15337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:18.735469  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:18.735481  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:18.797791  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:18.797811  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.328343  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:21.339006  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:21.339068  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:21.365940  896760 cri.go:89] found id: ""
	I1208 00:40:21.365954  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.365961  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:21.365967  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:21.366028  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:21.393056  896760 cri.go:89] found id: ""
	I1208 00:40:21.393071  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.393078  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:21.393083  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:21.393147  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:21.418602  896760 cri.go:89] found id: ""
	I1208 00:40:21.418616  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.418624  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:21.418630  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:21.418689  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:21.444947  896760 cri.go:89] found id: ""
	I1208 00:40:21.444963  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.444970  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:21.444976  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:21.445037  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:21.486428  896760 cri.go:89] found id: ""
	I1208 00:40:21.486461  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.486469  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:21.486476  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:21.486537  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:21.516432  896760 cri.go:89] found id: ""
	I1208 00:40:21.516448  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.516455  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:21.516461  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:21.516527  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:21.542473  896760 cri.go:89] found id: ""
	I1208 00:40:21.542488  896760 logs.go:282] 0 containers: []
	W1208 00:40:21.542501  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:21.542510  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:21.542521  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:21.558088  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:21.558105  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:21.646839  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:21.637518   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.638280   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.639952   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.640564   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:21.642225   15434 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:21.646850  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:21.646861  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:21.711182  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:21.711203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:21.739373  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:21.739391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.296477  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:24.307018  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:24.307079  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:24.333480  896760 cri.go:89] found id: ""
	I1208 00:40:24.333502  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.333521  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:24.333526  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:24.333587  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:24.359023  896760 cri.go:89] found id: ""
	I1208 00:40:24.359037  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.359044  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:24.359049  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:24.359118  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:24.384337  896760 cri.go:89] found id: ""
	I1208 00:40:24.384351  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.384358  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:24.384363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:24.384425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:24.409687  896760 cri.go:89] found id: ""
	I1208 00:40:24.409702  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.409709  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:24.409714  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:24.409774  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:24.434605  896760 cri.go:89] found id: ""
	I1208 00:40:24.434620  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.434627  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:24.434633  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:24.434690  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:24.464542  896760 cri.go:89] found id: ""
	I1208 00:40:24.464556  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.464569  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:24.464575  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:24.464638  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:24.489131  896760 cri.go:89] found id: ""
	I1208 00:40:24.489145  896760 logs.go:282] 0 containers: []
	W1208 00:40:24.489152  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:24.489159  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:24.489170  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:24.544278  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:24.544298  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:24.560095  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:24.560152  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:24.637902  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:40:24.622271   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.625402   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.626063   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.627090   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:24.632282   15542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 00:40:24.637914  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:24.637924  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:24.706243  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:24.706262  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.237246  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:27.247681  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:27.247744  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:27.272827  896760 cri.go:89] found id: ""
	I1208 00:40:27.272841  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.272848  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:27.272854  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:27.272917  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:27.298021  896760 cri.go:89] found id: ""
	I1208 00:40:27.298035  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.298042  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:27.298048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:27.298115  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:27.322943  896760 cri.go:89] found id: ""
	I1208 00:40:27.322975  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.322983  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:27.322989  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:27.323049  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:27.348507  896760 cri.go:89] found id: ""
	I1208 00:40:27.348522  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.348530  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:27.348535  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:27.348604  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:27.373824  896760 cri.go:89] found id: ""
	I1208 00:40:27.373838  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.373846  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:27.373851  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:27.373911  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:27.399388  896760 cri.go:89] found id: ""
	I1208 00:40:27.399402  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.399409  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:27.399415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:27.399481  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:27.427571  896760 cri.go:89] found id: ""
	I1208 00:40:27.427596  896760 logs.go:282] 0 containers: []
	W1208 00:40:27.427604  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:27.427612  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:27.427621  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:27.492713  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:27.492731  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:27.522269  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:27.522295  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:27.582384  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:27.582402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:27.602834  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:27.602850  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:27.689958  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:27.681544   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.682073   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.683995   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.684357   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:27.685900   15665 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
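The block above is one pass of minikube's apiserver wait loop: check for a kube-apiserver process, ask the CRI for containers by component name, and fall back to gathering logs when everything comes up empty. A minimal shell sketch of the same probe, assuming crictl is on the node's PATH; the script name and the 3-second interval are illustrative, not taken from minikube:

	# wait-apiserver.sh (hypothetical) -- re-creation of the probe in the log
	while true; do
	    # process-level check: -x exact match, -n newest, -f match full command line
	    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	        echo "apiserver process is up"; break
	    fi
	    # CRI-level check: --quiet prints container IDs only, so empty output
	    # is exactly the 'found id: ""' case logged above
	    ids=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	    [ -n "$ids" ] && { echo "apiserver container: $ids"; break; }
	    sleep 3
	done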
	I1208 00:40:30.190338  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:30.201839  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:30.201909  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:30.228924  896760 cri.go:89] found id: ""
	I1208 00:40:30.228939  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.228956  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:30.228963  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:30.229026  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:30.255337  896760 cri.go:89] found id: ""
	I1208 00:40:30.255351  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.255358  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:30.255363  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:30.255425  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:30.281566  896760 cri.go:89] found id: ""
	I1208 00:40:30.281581  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.281588  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:30.281594  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:30.281655  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:30.308175  896760 cri.go:89] found id: ""
	I1208 00:40:30.308189  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.308197  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:30.308202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:30.308282  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:30.336203  896760 cri.go:89] found id: ""
	I1208 00:40:30.336218  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.336226  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:30.336241  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:30.336302  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:30.368832  896760 cri.go:89] found id: ""
	I1208 00:40:30.368847  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.368855  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:30.368860  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:30.368940  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:30.396840  896760 cri.go:89] found id: ""
	I1208 00:40:30.396855  896760 logs.go:282] 0 containers: []
	W1208 00:40:30.396862  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:30.396870  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:30.396880  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:30.458293  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:30.458313  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:30.489792  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:30.489807  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:30.546970  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:30.546989  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:30.561949  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:30.561969  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:30.648665  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:30.640064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.641064   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.642741   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.643112   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:30.644583   15764 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
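Between probes the test gathers diagnostics with plain shell one-liners over SSH. A condensed sketch of the same collection, assuming systemd units named containerd and kubelet as in the log; the output file names are illustrative:

	sudo journalctl -u containerd -n 400 > containerd.log   # last 400 containerd lines
	sudo journalctl -u kubelet    -n 400 > kubelet.log      # last 400 kubelet lines
	# human-readable kernel messages (-H) with the pager (-P) and colour (-L=never)
	# disabled, warning level and above only
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	# container status: prefer crictl, fall back to docker when crictl is absent
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a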
	I1208 00:40:33.148954  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:33.159678  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:33.159739  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:33.188692  896760 cri.go:89] found id: ""
	I1208 00:40:33.188707  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.188725  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:33.188731  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:33.188815  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:33.214527  896760 cri.go:89] found id: ""
	I1208 00:40:33.214542  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.214550  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:33.214555  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:33.214614  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:33.241307  896760 cri.go:89] found id: ""
	I1208 00:40:33.241323  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.241331  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:33.241336  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:33.241395  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:33.267242  896760 cri.go:89] found id: ""
	I1208 00:40:33.267257  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.267265  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:33.267271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:33.267331  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:33.293623  896760 cri.go:89] found id: ""
	I1208 00:40:33.293637  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.293645  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:33.293650  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:33.293710  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:33.319375  896760 cri.go:89] found id: ""
	I1208 00:40:33.319388  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.319395  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:33.319401  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:33.319477  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:33.345164  896760 cri.go:89] found id: ""
	I1208 00:40:33.345178  896760 logs.go:282] 0 containers: []
	W1208 00:40:33.345186  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:33.345193  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:33.345203  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:33.402766  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:33.402783  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:33.417559  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:33.417576  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:33.484831  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:33.475790   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.476662   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.478492   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.479126   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:33.480879   15857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
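In the crictl calls, --name is a regular-expression filter on the container name, -a includes exited containers, and --quiet restricts output to container IDs, which is why a miss is logged as found id: "". The cri.go lines also name the runc state root /run/containerd/runc/k8s.io, i.e. the k8s.io containerd namespace, so the same containers can be cross-checked with ctr directly, assuming the default containerd socket:

	# by ID only, running or exited; empty output means no such container exists
	sudo crictl ps -a --quiet --name=kube-scheduler
	# cross-check against containerd's k8s.io namespace
	sudo ctr --namespace k8s.io containers list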
	I1208 00:40:33.484841  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:33.484851  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:33.553499  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:33.553527  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.087539  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:36.098484  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:36.098549  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:36.123061  896760 cri.go:89] found id: ""
	I1208 00:40:36.123075  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.123083  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:36.123089  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:36.123150  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:36.152786  896760 cri.go:89] found id: ""
	I1208 00:40:36.152800  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.152807  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:36.152813  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:36.152874  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:36.179122  896760 cri.go:89] found id: ""
	I1208 00:40:36.179137  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.179144  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:36.179150  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:36.179211  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:36.205226  896760 cri.go:89] found id: ""
	I1208 00:40:36.205239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.205247  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:36.205253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:36.205311  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:36.231018  896760 cri.go:89] found id: ""
	I1208 00:40:36.231033  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.231040  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:36.231046  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:36.231104  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:36.257226  896760 cri.go:89] found id: ""
	I1208 00:40:36.257239  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.257247  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:36.257253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:36.257312  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:36.282378  896760 cri.go:89] found id: ""
	I1208 00:40:36.282395  896760 logs.go:282] 0 containers: []
	W1208 00:40:36.282402  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:36.282411  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:36.282422  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:36.297365  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:36.297381  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:36.361334  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:36.352968   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.353402   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355210   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.355743   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:36.357267   15961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
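The describe-nodes step runs the kubectl binary staged under /var/lib/minikube/binaries against the admin kubeconfig on the node. Its stderr shows kubectl resolving localhost to the IPv6 loopback and getting connection refused on [::1]:8441, i.e. nothing is listening on the apiserver port at all, consistent with the empty container listings. A quick manual check on the node, assuming ss from iproute2 and curl are available (these commands are illustrative, not part of the test):

	# is anything bound to the apiserver port?
	sudo ss -tlnp | grep 8441
	# the same request kubectl makes, tried against both loopbacks
	curl -sk https://127.0.0.1:8441/livez
	curl -sk 'https://[::1]:8441/livez'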
	I1208 00:40:36.361345  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:36.361356  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:36.425983  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:36.426003  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:36.458376  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:36.458391  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:39.019300  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:39.030277  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:39.030337  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:39.059011  896760 cri.go:89] found id: ""
	I1208 00:40:39.059026  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.059033  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:39.059039  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:39.059099  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:39.084787  896760 cri.go:89] found id: ""
	I1208 00:40:39.084802  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.084809  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:39.084815  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:39.084879  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:39.111166  896760 cri.go:89] found id: ""
	I1208 00:40:39.111179  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.111186  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:39.111192  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:39.111252  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:39.140388  896760 cri.go:89] found id: ""
	I1208 00:40:39.140403  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.140410  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:39.140415  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:39.140475  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:39.165040  896760 cri.go:89] found id: ""
	I1208 00:40:39.165054  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.165062  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:39.165067  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:39.165130  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:39.191099  896760 cri.go:89] found id: ""
	I1208 00:40:39.191114  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.191122  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:39.191127  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:39.191187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:39.215889  896760 cri.go:89] found id: ""
	I1208 00:40:39.215903  896760 logs.go:282] 0 containers: []
	W1208 00:40:39.215910  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:39.215918  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:39.215934  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:39.279738  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:39.279760  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:39.295091  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:39.295108  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:39.363341  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:39.354264   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.354968   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.356687   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.357285   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:39.358908   16065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:39.363363  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:39.363373  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:39.428022  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:39.428043  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:41.960665  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:41.971071  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:41.971142  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:41.997224  896760 cri.go:89] found id: ""
	I1208 00:40:41.997239  896760 logs.go:282] 0 containers: []
	W1208 00:40:41.997247  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:41.997253  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:41.997315  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:42.035665  896760 cri.go:89] found id: ""
	I1208 00:40:42.035680  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.035687  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:42.035692  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:42.035758  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:42.064088  896760 cri.go:89] found id: ""
	I1208 00:40:42.064103  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.064111  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:42.064117  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:42.064181  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:42.092740  896760 cri.go:89] found id: ""
	I1208 00:40:42.092757  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.092765  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:42.092771  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:42.092844  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:42.124291  896760 cri.go:89] found id: ""
	I1208 00:40:42.124309  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.124321  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:42.124329  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:42.124428  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:42.155416  896760 cri.go:89] found id: ""
	I1208 00:40:42.155431  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.155439  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:42.155445  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:42.155515  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:42.188921  896760 cri.go:89] found id: ""
	I1208 00:40:42.188938  896760 logs.go:282] 0 containers: []
	W1208 00:40:42.188945  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:42.188954  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:42.188965  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:42.249292  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:42.249321  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:42.266137  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:42.266155  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:42.342321  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:42.332243   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.333672   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.334366   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336262   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:42.336858   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:42.342333  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:42.342344  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:42.406583  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:42.406602  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:44.937561  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:44.948618  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:44.948679  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:44.982163  896760 cri.go:89] found id: ""
	I1208 00:40:44.982177  896760 logs.go:282] 0 containers: []
	W1208 00:40:44.982195  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:44.982202  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:44.982276  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:45.033982  896760 cri.go:89] found id: ""
	I1208 00:40:45.033999  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.034008  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:45.034014  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:45.034085  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:45.089336  896760 cri.go:89] found id: ""
	I1208 00:40:45.089353  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.089362  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:45.089368  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:45.089437  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:45.132530  896760 cri.go:89] found id: ""
	I1208 00:40:45.132547  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.132555  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:45.132561  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:45.132672  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:45.207404  896760 cri.go:89] found id: ""
	I1208 00:40:45.207423  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.207432  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:45.207438  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:45.207516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:45.247451  896760 cri.go:89] found id: ""
	I1208 00:40:45.247477  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.247486  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:45.247493  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:45.247562  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:45.291347  896760 cri.go:89] found id: ""
	I1208 00:40:45.291363  896760 logs.go:282] 0 containers: []
	W1208 00:40:45.291373  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:45.291382  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:45.291393  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:45.358718  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:45.358739  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:45.375670  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:45.375694  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:45.443052  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:45.434008   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.434889   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.436585   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.437154   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:45.438976   16272 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:45.443063  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:45.443075  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:45.507120  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:45.507142  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:48.037423  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:48.048528  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:48.048599  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:48.078292  896760 cri.go:89] found id: ""
	I1208 00:40:48.078307  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.078314  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:48.078320  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:48.078380  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:48.103852  896760 cri.go:89] found id: ""
	I1208 00:40:48.103867  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.103874  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:48.103879  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:48.103938  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:48.129348  896760 cri.go:89] found id: ""
	I1208 00:40:48.129364  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.129371  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:48.129376  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:48.129434  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:48.154375  896760 cri.go:89] found id: ""
	I1208 00:40:48.154390  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.154397  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:48.154402  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:48.154497  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:48.180043  896760 cri.go:89] found id: ""
	I1208 00:40:48.180058  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.180065  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:48.180070  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:48.180126  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:48.208497  896760 cri.go:89] found id: ""
	I1208 00:40:48.208511  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.208518  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:48.208524  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:48.208582  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:48.236937  896760 cri.go:89] found id: ""
	I1208 00:40:48.236960  896760 logs.go:282] 0 containers: []
	W1208 00:40:48.236968  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:48.236975  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:48.236985  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:48.252020  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:48.252037  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:48.317246  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:48.308656   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.309213   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.310815   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.311307   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:48.313074   16376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
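When every pass ends in the kindnet-not-found branch like this, the kubelet journal gathered above is usually where the underlying failure first shows (image pull errors, static-pod manifest problems, cgroup misconfiguration). A narrowed variant of the same journalctl call, assuming the kubelet unit name from the log; the grep pattern is only a starting point:

	# recent kubelet errors only
	sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40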
	I1208 00:40:48.317257  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:48.317267  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:48.381926  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:48.381947  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:48.410384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:48.410402  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:50.965799  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:50.977456  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:50.977516  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:51.008659  896760 cri.go:89] found id: ""
	I1208 00:40:51.008677  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.008685  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:51.008691  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:51.008763  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:51.043130  896760 cri.go:89] found id: ""
	I1208 00:40:51.043144  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.043151  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:51.043157  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:51.043217  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:51.071991  896760 cri.go:89] found id: ""
	I1208 00:40:51.072014  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.072022  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:51.072028  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:51.072091  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:51.098639  896760 cri.go:89] found id: ""
	I1208 00:40:51.098654  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.098661  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:51.098667  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:51.098727  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:51.125133  896760 cri.go:89] found id: ""
	I1208 00:40:51.125147  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.125154  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:51.125159  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:51.125220  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:51.152232  896760 cri.go:89] found id: ""
	I1208 00:40:51.152247  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.152255  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:51.152271  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:51.152333  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:51.181299  896760 cri.go:89] found id: ""
	I1208 00:40:51.181313  896760 logs.go:282] 0 containers: []
	W1208 00:40:51.181321  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:51.181329  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:51.181339  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:51.243933  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:51.243955  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:40:51.272384  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:51.272400  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:51.334024  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:51.334042  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:51.349155  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:51.349172  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:51.419857  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:51.411268   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.412299   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.413252   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.414160   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:51.415792   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
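
The describe-nodes failure itself is secondary: kubectl cannot reach anything on the apiserver port (8441 is the apiserver port used throughout this log). A plain TCP dial reproduces the same "connection refused" shown in the stderr above (a hedged sketch, not part of the test harness):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// With no apiserver listening, this prints "... connect: connection refused".
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
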
	I1208 00:40:53.920516  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:40:53.931349  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:40:53.931410  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:40:53.960789  896760 cri.go:89] found id: ""
	I1208 00:40:53.960805  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.960816  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:40:53.960821  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:40:53.960887  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:40:53.991352  896760 cri.go:89] found id: ""
	I1208 00:40:53.991368  896760 logs.go:282] 0 containers: []
	W1208 00:40:53.991376  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:40:53.991382  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:40:53.991452  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:40:54.024088  896760 cri.go:89] found id: ""
	I1208 00:40:54.024103  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.024117  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:40:54.024123  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:40:54.024187  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:40:54.051247  896760 cri.go:89] found id: ""
	I1208 00:40:54.051262  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.051269  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:40:54.051274  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:40:54.051335  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:40:54.077953  896760 cri.go:89] found id: ""
	I1208 00:40:54.077968  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.077975  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:40:54.077985  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:40:54.078051  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:40:54.104672  896760 cri.go:89] found id: ""
	I1208 00:40:54.104686  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.104693  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:40:54.104699  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:40:54.104757  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:40:54.129936  896760 cri.go:89] found id: ""
	I1208 00:40:54.129950  896760 logs.go:282] 0 containers: []
	W1208 00:40:54.129957  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:40:54.129965  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:40:54.129976  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:40:54.190590  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:40:54.190610  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:40:54.206141  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:40:54.206158  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:40:54.274636  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:40:54.265398   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.266260   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268064   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.268746   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:40:54.270305   16588 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1208 00:40:54.274647  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:40:54.274658  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:40:54.343673  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:40:54.343693  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
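
Taken together, these cycles form a simple deadline-bounded wait: probe for a kube-apiserver process, fall back to log collection when it is absent, sleep, repeat. A rough Go sketch of that shape (illustrative only; the interval and budget are inferred from the ~3s cadence in the log and the 4m1s duration reported below, not taken from minikube's source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("kube-apiserver is running")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver; falling back to a cluster reset")
    }
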
	[elided: the same poll cycle repeats five more times at 00:40:56, 00:40:59, 00:41:02, 00:41:05 and 00:41:08 with functionally identical results - pgrep finds no kube-apiserver, every per-component crictl listing returns no containers, and each `kubectl describe nodes` attempt fails again with "connection refused" on localhost:8441; only the timestamps and the kubectl PIDs (16705, 16808, 16901, 17002, 17126) differ]
	I1208 00:41:11.798922  896760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:41:11.809212  896760 kubeadm.go:602] duration metric: took 4m1.466236852s to restartPrimaryControlPlane
	W1208 00:41:11.809278  896760 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 00:41:11.810440  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:41:12.224260  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:41:12.238539  896760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 00:41:12.247058  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:41:12.247114  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:41:12.255525  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:41:12.255534  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:41:12.255586  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:41:12.263892  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:41:12.263953  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:41:12.271955  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:41:12.280091  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:41:12.280149  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:41:12.288143  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.296120  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:41:12.296196  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:41:12.303946  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:41:12.312368  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:41:12.312423  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
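
The cleanup loop above greps each kubeconfig for the expected control-plane endpoint and removes any file that lacks it (here all four greps fail simply because the files are already gone after `kubeadm reset`). A compact Go sketch of the same check-and-remove logic (a hypothetical helper, not kubeadm.go itself):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8441"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the pattern or the file itself is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s does not pin %s; removing it\n", f, endpoint)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }
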
	I1208 00:41:12.320463  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:41:12.364373  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:41:12.364695  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:41:12.438406  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:41:12.438492  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:41:12.438531  896760 kubeadm.go:319] OS: Linux
	I1208 00:41:12.438577  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:41:12.438625  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:41:12.438672  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:41:12.438719  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:41:12.438766  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:41:12.438813  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:41:12.438857  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:41:12.438904  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:41:12.438949  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:41:12.514836  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:41:12.514942  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:41:12.515034  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:41:12.521560  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:41:12.527008  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:41:12.527099  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:41:12.527164  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:41:12.527241  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:41:12.527300  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:41:12.527369  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:41:12.527423  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:41:12.527485  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:41:12.527544  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:41:12.527617  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:41:12.527688  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:41:12.527724  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:41:12.527778  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:41:13.245010  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:41:13.299392  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:41:13.614595  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:41:13.963710  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:41:14.175279  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:41:14.176043  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:41:14.180186  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:41:14.183629  896760 out.go:252]   - Booting up control plane ...
	I1208 00:41:14.183729  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:41:14.183806  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:41:14.184436  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:41:14.204887  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:41:14.204990  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:41:14.213421  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:41:14.213704  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:41:14.213908  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:41:14.352082  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:41:14.352289  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:45:14.352397  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00008019s
	I1208 00:45:14.352432  896760 kubeadm.go:319] 
	I1208 00:45:14.352488  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:45:14.352520  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:45:14.352633  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:45:14.352639  896760 kubeadm.go:319] 
	I1208 00:45:14.352742  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:45:14.352774  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:45:14.352803  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:45:14.352807  896760 kubeadm.go:319] 
	I1208 00:45:14.356965  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:45:14.357429  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:45:14.357540  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:45:14.357802  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 00:45:14.357807  896760 kubeadm.go:319] 
	I1208 00:45:14.357875  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 00:45:14.357995  896760 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00008019s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1208 00:45:14.358087  896760 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 00:45:14.770086  896760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:45:14.783732  896760 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 00:45:14.783788  896760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 00:45:14.791646  896760 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 00:45:14.791657  896760 kubeadm.go:158] found existing configuration files:
	
	I1208 00:45:14.791710  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1208 00:45:14.799512  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 00:45:14.799569  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 00:45:14.807303  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1208 00:45:14.815223  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 00:45:14.815280  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 00:45:14.822916  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.831219  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 00:45:14.831274  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 00:45:14.838751  896760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1208 00:45:14.846479  896760 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 00:45:14.846535  896760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 00:45:14.855105  896760 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 00:45:14.892727  896760 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 00:45:14.893019  896760 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 00:45:14.958827  896760 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 00:45:14.958888  896760 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 00:45:14.958921  896760 kubeadm.go:319] OS: Linux
	I1208 00:45:14.958963  896760 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 00:45:14.959008  896760 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 00:45:14.959052  896760 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 00:45:14.959097  896760 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 00:45:14.959143  896760 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 00:45:14.959192  896760 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 00:45:14.959234  896760 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 00:45:14.959279  896760 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 00:45:14.959321  896760 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 00:45:15.063986  896760 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 00:45:15.064091  896760 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 00:45:15.064182  896760 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 00:45:15.072119  896760 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 00:45:15.073836  896760 out.go:252]   - Generating certificates and keys ...
	I1208 00:45:15.073929  896760 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 00:45:15.073997  896760 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 00:45:15.074078  896760 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 00:45:15.074847  896760 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 00:45:15.074919  896760 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 00:45:15.074970  896760 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 00:45:15.075029  896760 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 00:45:15.075086  896760 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 00:45:15.075260  896760 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 00:45:15.075466  896760 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 00:45:15.075788  896760 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 00:45:15.075847  896760 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 00:45:15.207541  896760 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 00:45:15.419182  896760 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 00:45:15.708081  896760 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 00:45:15.925468  896760 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 00:45:16.152957  896760 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 00:45:16.153669  896760 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 00:45:16.156472  896760 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 00:45:16.157817  896760 out.go:252]   - Booting up control plane ...
	I1208 00:45:16.157909  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 00:45:16.157987  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 00:45:16.159025  896760 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 00:45:16.179954  896760 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 00:45:16.180052  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 00:45:16.189229  896760 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 00:45:16.190665  896760 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 00:45:16.190709  896760 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 00:45:16.336970  896760 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 00:45:16.337083  896760 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 00:49:16.337272  896760 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305556s
	I1208 00:49:16.337296  896760 kubeadm.go:319] 
	I1208 00:49:16.337409  896760 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 00:49:16.337518  896760 kubeadm.go:319] 	- The kubelet is not running
	I1208 00:49:16.337839  896760 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 00:49:16.337849  896760 kubeadm.go:319] 
	I1208 00:49:16.338164  896760 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 00:49:16.338221  896760 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 00:49:16.338281  896760 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 00:49:16.338285  896760 kubeadm.go:319] 
	I1208 00:49:16.344611  896760 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 00:49:16.345152  896760 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 00:49:16.345268  896760 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 00:49:16.345632  896760 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 00:49:16.345641  896760 kubeadm.go:319] 
	I1208 00:49:16.345722  896760 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 00:49:16.345780  896760 kubeadm.go:403] duration metric: took 12m6.045651138s to StartCluster
	I1208 00:49:16.345820  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 00:49:16.345897  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 00:49:16.378062  896760 cri.go:89] found id: ""
	I1208 00:49:16.378080  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.378088  896760 logs.go:284] No container was found matching "kube-apiserver"
	I1208 00:49:16.378094  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 00:49:16.378167  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 00:49:16.414010  896760 cri.go:89] found id: ""
	I1208 00:49:16.414024  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.414043  896760 logs.go:284] No container was found matching "etcd"
	I1208 00:49:16.414048  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 00:49:16.414116  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 00:49:16.441708  896760 cri.go:89] found id: ""
	I1208 00:49:16.441732  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.441739  896760 logs.go:284] No container was found matching "coredns"
	I1208 00:49:16.441745  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 00:49:16.441816  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 00:49:16.469812  896760 cri.go:89] found id: ""
	I1208 00:49:16.469826  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.469833  896760 logs.go:284] No container was found matching "kube-scheduler"
	I1208 00:49:16.469848  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 00:49:16.469906  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 00:49:16.495155  896760 cri.go:89] found id: ""
	I1208 00:49:16.495170  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.495177  896760 logs.go:284] No container was found matching "kube-proxy"
	I1208 00:49:16.495183  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 00:49:16.495242  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 00:49:16.522141  896760 cri.go:89] found id: ""
	I1208 00:49:16.522155  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.522163  896760 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 00:49:16.522168  896760 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 00:49:16.522227  896760 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 00:49:16.551643  896760 cri.go:89] found id: ""
	I1208 00:49:16.551656  896760 logs.go:282] 0 containers: []
	W1208 00:49:16.551663  896760 logs.go:284] No container was found matching "kindnet"
	I1208 00:49:16.551671  896760 logs.go:123] Gathering logs for containerd ...
	I1208 00:49:16.551681  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 00:49:16.614342  896760 logs.go:123] Gathering logs for container status ...
	I1208 00:49:16.614362  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 00:49:16.644124  896760 logs.go:123] Gathering logs for kubelet ...
	I1208 00:49:16.644140  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 00:49:16.703646  896760 logs.go:123] Gathering logs for dmesg ...
	I1208 00:49:16.703665  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 00:49:16.718513  896760 logs.go:123] Gathering logs for describe nodes ...
	I1208 00:49:16.718530  896760 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 00:49:16.782371  896760 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 00:49:16.773678   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.774481   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776073   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.776375   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:49:16.777890   20953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	W1208 00:49:16.782383  896760 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 00:49:16.782409  896760 out.go:285] * 
	W1208 00:49:16.782515  896760 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.782535  896760 out.go:285] * 
	W1208 00:49:16.784660  896760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 00:49:16.789524  896760 out.go:203] 
	W1208 00:49:16.792367  896760 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305556s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 00:49:16.792413  896760 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 00:49:16.792436  896760 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 00:49:16.795713  896760 out.go:203] 
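	The suggestion above points at the kubelet cgroup driver, but the kubelet journal below shows the actual blocker: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host unless the KubeletConfiguration field failCgroupV1 is explicitly set to false, which matches the kubeadm preflight warning earlier in this log. Since the log also shows kubeadm applying a strategic-merge patch to the "kubeletconfiguration" target, a minimal sketch of such a patch follows; the file name and the use of a kubeadm patches directory are illustrative assumptions, not taken from this run:

		# kubeletconfiguration+strategic.yaml -- illustrative kubeadm patch, not from this run
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		# Per the preflight warning: explicitly allow cgroup v1 for kubelet v1.35+.
		# The warning also notes the cgroups-v1 validation must be skipped separately.
		failCgroupV1: false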
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919395045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919406500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919452219Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919473840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919487707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919499637Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919508720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919528249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919545578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919576815Z" level=info msg="Connect containerd service"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.919974424Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.920657812Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935258404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935352461Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935383608Z" level=info msg="Start subscribing containerd event"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.935425134Z" level=info msg="Start recovering state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981163284Z" level=info msg="Start event monitor"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981372805Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981441769Z" level=info msg="Start streaming server"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981512023Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981572914Z" level=info msg="runtime interface starting up..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981643085Z" level=info msg="starting plugins..."
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981710277Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 00:37:08 functional-386544 containerd[9748]: time="2025-12-08T00:37:08.981908794Z" level=info msg="containerd successfully booted in 0.086733s"
	Dec 08 00:37:08 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:51:14.238192   22457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:14.238958   22457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:14.240737   22457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:14.241319   22457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:14.243036   22457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:51:14 up  5:33,  0 user,  load average: 0.34, 0.24, 0.55
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:51:11 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:11 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 08 00:51:11 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:11 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:11 functional-386544 kubelet[22341]: E1208 00:51:11.886364   22341 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:11 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:11 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:12 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 08 00:51:12 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:12 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:12 functional-386544 kubelet[22346]: E1208 00:51:12.640007   22346 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:12 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:12 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:13 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 08 00:51:13 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:13 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:13 functional-386544 kubelet[22365]: E1208 00:51:13.361631   22365 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:13 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:13 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:14 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 08 00:51:14 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:14 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:14 functional-386544 kubelet[22438]: E1208 00:51:14.163856   22438 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:14 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:14 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
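	The restart loop above (counter at 477) repeats the identical validation error on every attempt, so this is a deterministic configuration failure rather than a crash. A quick, standard way to confirm which cgroup mode the node is actually running (nothing here is specific to this job):

		# Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a cgroup v1 host.
		stat -fc %T /sys/fs/cgroup/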
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (373.393484ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1208 00:49:30.128039  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1208 00:49:35.133957  846711 retry.go:31] will retry after 4.026714747s: Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1208 00:49:49.162642  846711 retry.go:31] will retry after 4.26002268s: Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1208 00:50:03.423783  846711 retry.go:31] will retry after 7.549498463s: Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1208 00:50:20.975113  846711 retry.go:31] will retry after 13.290910102s: Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1208 00:50:44.267631  846711 retry.go:31] will retry after 18.214307997s: Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
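The retry.go entries above show the poller backing off with growing intervals (4.03s, 4.26s, 7.55s, 13.29s, 18.21s) while http://10.100.224.0 stays unreachable. A minimal Go sketch of that kind of jittered exponential backoff, for illustration only; probe, retryWithBackoff, and the timeout values are assumptions, not minikube's actual retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// probe issues a GET with a short client timeout, mirroring the
// "Client.Timeout exceeded while awaiting headers" failures above.
func probe(url string) error {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

// retryWithBackoff retries fn with an exponentially growing, jittered
// delay, similar in spirit to the intervals logged by retry.go above.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2)) // spread retries out
		wait := delay + jitter
		fmt.Printf("will retry after %s\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("all attempts failed")
}

func main() {
	// 10.100.224.0 is the unreachable service IP from the log above.
	err := retryWithBackoff(func() error { return probe("http://10.100.224.0") }, 5, 4*time.Second)
	fmt.Println(err)
}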
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1208 00:51:28.223698  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous WARNING line repeated 34 more times]
E1208 00:52:33.202578  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the previous WARNING line repeated 51 more times]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
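For context, the run of WARNINGs above is the test helper re-listing pods by label selector until its 4m0s budget lapses; once the context expires, client-go's rate limiter surfaces the "context deadline exceeded" seen on the final attempt. A minimal sketch of that polling pattern, assuming client-go and the default kubeconfig path (illustrative only, not the harness's exact helper code):

	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig; the harness wires up its own profile config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same 4m0s budget as the test; each failed list is logged, then retried.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		err = wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=storage-provisioner",
			})
			if err != nil {
				// The shape of the WARNING lines above: log and keep polling.
				log.Printf("WARNING: pod list returned: %v", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			// On timeout this wraps context.DeadlineExceeded, as above.
			log.Fatalf("pod failed to start within 4m0s: %v", err)
		}
	}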
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (306.404373ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
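Note the mismatch the inspect dump documents: the container itself is "running" with 8441/tcp published to 127.0.0.1:33561, yet the apiserver behind that port refuses connections. A small sketch for pulling just those fields out of `docker inspect` output; the struct below models only the fields used here and is an assumption, not Docker's full schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Only the inspect fields this post-mortem actually reads.
	type container struct {
		State struct {
			Status string
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
			Networks map[string]struct {
				IPAddress string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-386544").Output()
		if err != nil {
			log.Fatal(err)
		}
		var info []container // docker inspect emits a JSON array
		if err := json.Unmarshal(out, &info); err != nil || len(info) == 0 {
			log.Fatalf("decode inspect output: %v", err)
		}
		c := info[0]
		fmt.Println("container state:", c.State.Status) // "running" here
		for _, b := range c.NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
		for name, n := range c.NetworkSettings.Networks {
			fmt.Printf("network %s, node IP %s\n", name, n.IPAddress)
		}
	}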
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (343.381565ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
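The --format flags above are Go templates evaluated against minikube's status struct, which is how the same probe can report the Host as "Running" while the APIServer is "Stopped". A minimal stand-in (the field names match the templates used here; the struct itself is illustrative, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the fields the --format templates reference.
	type status struct {
		Host      string
		APIServer string
	}

	func main() {
		s := status{Host: "Running", APIServer: "Stopped"}
		// Same syntax as `minikube status --format={{.APIServer}}`.
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
		// Output: Stopped (matches the stdout block above)
	}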
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image save kicbase/echo-server:functional-386544 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image rm kicbase/echo-server:functional-386544 --alsologtostderr                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image save --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /etc/ssl/certs/846711.pem                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /usr/share/ca-certificates/846711.pem                                                                                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /etc/ssl/certs/8467112.pem                                                                                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /usr/share/ca-certificates/8467112.pem                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh sudo cat /etc/test/nested/copy/846711/hosts                                                                                               │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls --format short --alsologtostderr                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls --format yaml --alsologtostderr                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh            │ functional-386544 ssh pgrep buildkitd                                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ image          │ functional-386544 image build -t localhost/my-image:functional-386544 testdata/build --alsologtostderr                                                          │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls --format json --alsologtostderr                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image          │ functional-386544 image ls --format table --alsologtostderr                                                                                                     │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ update-context │ functional-386544 update-context --alsologtostderr -v=2                                                                                                         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ update-context │ functional-386544 update-context --alsologtostderr -v=2                                                                                                         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ update-context │ functional-386544 update-context --alsologtostderr -v=2                                                                                                         │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:51:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:51:29.270661  913965 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:51:29.270858  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.270877  913965 out.go:374] Setting ErrFile to fd 2...
	I1208 00:51:29.270883  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.271187  913965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:51:29.271612  913965 out.go:368] Setting JSON to false
	I1208 00:51:29.272534  913965 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":20042,"bootTime":1765135047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:51:29.272617  913965 start.go:143] virtualization:  
	I1208 00:51:29.275838  913965 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:51:29.279436  913965 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:51:29.279577  913965 notify.go:221] Checking for updates...
	I1208 00:51:29.285227  913965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:51:29.288262  913965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:51:29.291134  913965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:51:29.293999  913965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:51:29.296847  913965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:51:29.300084  913965 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:51:29.300777  913965 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:51:29.325912  913965 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:51:29.326034  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.396704  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.386909367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.396816  913965 docker.go:319] overlay module found
	I1208 00:51:29.399947  913965 out.go:179] * Using the docker driver based on existing profile
	I1208 00:51:29.402712  913965 start.go:309] selected driver: docker
	I1208 00:51:29.402732  913965 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.402834  913965 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:51:29.402950  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.458342  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.448170088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.458809  913965 cni.go:84] Creating CNI manager for ""
	I1208 00:51:29.458888  913965 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:51:29.458933  913965 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.461944  913965 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.374212376Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.374594640Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.428859035Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.431238153Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.433347442Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.443442822Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\" returns successfully"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.676662325Z" level=info msg="No images store for sha256:78dfebaee79f5a0a3952743e65335ded82843184c05c5307bb924e360fb11708"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.678886544Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.688477213Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.689122003Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.499790938Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.502170080Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.505436254Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.513199754Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\" returns successfully"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.199508860Z" level=info msg="No images store for sha256:c084d52ed37e8e7a8bea071d83cf32d87a0515129f61673c9a09f4d19a04e6e4"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.201766401Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.208900701Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.210001676Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.395839266Z" level=info msg="connecting to shim 00zyoi1sqtguzibysu7s5zk8r" address="unix:///run/containerd/s/ce9eb6921f925f659c204332f48d2d8e923f6c669b6866c92e34acc19a00fc5e" namespace=k8s.io protocol=ttrpc version=3
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.471438782Z" level=info msg="shim disconnected" id=00zyoi1sqtguzibysu7s5zk8r namespace=k8s.io
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.471610631Z" level=info msg="cleaning up after shim disconnected" id=00zyoi1sqtguzibysu7s5zk8r namespace=k8s.io
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.471683527Z" level=info msg="cleaning up dead shim" id=00zyoi1sqtguzibysu7s5zk8r namespace=k8s.io
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.765631477Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-386544\""
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.771663883Z" level=info msg="ImageCreate event name:\"sha256:d578e88641f22895256350e9a0edd01255442515ed97b7121568849cba4887d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:45 functional-386544 containerd[9748]: time="2025-12-08T00:51:45.772057380Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:53:26.798023   25146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:53:26.798696   25146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:53:26.800399   25146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:53:26.801201   25146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:53:26.803128   25146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:53:26 up  5:35,  0 user,  load average: 0.09, 0.21, 0.50
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:53:23 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:23 functional-386544 kubelet[25013]: E1208 00:53:23.890319   25013 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:53:23 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:53:23 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:53:24 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 08 00:53:24 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:24 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:24 functional-386544 kubelet[25019]: E1208 00:53:24.645310   25019 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:53:24 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:53:24 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:53:25 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 08 00:53:25 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:25 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:25 functional-386544 kubelet[25025]: E1208 00:53:25.391172   25025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:53:25 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:53:25 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:53:26 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 08 00:53:26 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:26 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:26 functional-386544 kubelet[25057]: E1208 00:53:26.169269   25057 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:53:26 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:53:26 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:53:26 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 654.
	Dec 08 00:53:26 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:53:26 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (330.920599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.68s)
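
Editor's note: the kubelet log above is the root cause behind this test group's failures. kubelet v1.35.0-beta.0 exits during configuration validation because the host uses cgroup v1 ("cgroup v1 support is unsupported and will be removed in a future release"), systemd restart-loops it (restart counter 651 through 654), and the apiserver on port 8441 therefore never answers, which is the connection-refused error in the describe-nodes section. Below is a minimal sketch (not part of the test suite) for confirming which cgroup hierarchy the host exposes, assuming the conventional /sys/fs/cgroup mount, where the cgroup.controllers marker file exists only on a cgroup v2 unified mount:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup.controllers exists only at the root of a cgroup v2 (unified) mount.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)") // the case the kubelet above rejects
	}
}

Running this on the ip-172-31-24-2 host (kernel 5.15.0-1084-aws, Ubuntu 20.04) would be expected to print the cgroup v1 case; the fix is to run on a host with the unified hierarchy or to explicitly configure kubelet to tolerate v1.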

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-386544 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-386544 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (60.410817ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-386544 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
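
Editor's note: the template error repeated above is mechanical fallout from the dead apiserver rather than a labeling bug: kubectl receives an empty List ({"items":[]}), and index .items 0 cannot take element 0 of a zero-length slice. A standalone sketch (hypothetical, not taken from functional_test.go) that reproduces the failure with Go's text/template and shows a guarded variant:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// What kubectl got back from the refused apiserver: a List with no items.
	data := map[string]any{"items": []any{}}

	// The failing form used by the test: index 0 of an empty slice errors.
	failing := template.Must(template.New("output").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := failing.Execute(os.Stdout, data); err != nil {
		fmt.Println("error:", err) // "... error calling index: ... slice index out of range"
	}

	// A guarded variant: prints nothing for an empty node list instead of erroring.
	guarded := template.Must(template.New("guarded").Parse(
		`{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`))
	_ = guarded.Execute(os.Stdout, data)
}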
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	        "Created": "2025-12-08T00:22:27.490172837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 885529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T00:22:27.576231077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
	        "LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
	        "Name": "/functional-386544",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-386544:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-386544",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
	                "LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-386544",
	                "Source": "/var/lib/docker/volumes/functional-386544/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-386544",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-386544",
	                "name.minikube.sigs.k8s.io": "functional-386544",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
	            "SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-386544": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:66:b2:62:5b:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
	                    "EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-386544",
	                        "fc0795925cb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
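
Editor's note: the NetworkSettings block above is how the suite reaches the cluster from the host: the apiserver's container port 8441/tcp is published on 127.0.0.1:33561. A sketch (assuming a local Docker CLI and the container name from the inspect output) that extracts that mapping with docker inspect's --format template:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the host port bound to the apiserver's 8441/tcp from docker inspect.
	format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "--format", format, "functional-386544").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33561 in the log above
}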
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 2 (326.274275ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount2 --alsologtostderr -v=1                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ mount     │ -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount3 --alsologtostderr -v=1                            │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh findmnt -T /mount1                                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh findmnt -T /mount2                                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh findmnt -T /mount3                                                                                                                        │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ mount     │ -p functional-386544 --kill=true                                                                                                                                │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ start     │ -p functional-386544 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-386544 --alsologtostderr -v=1                                                                                                  │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ ssh       │ functional-386544 ssh sudo systemctl is-active docker                                                                                                           │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ ssh       │ functional-386544 ssh sudo systemctl is-active crio                                                                                                             │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │                     │
	│ image     │ functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image save kicbase/echo-server:functional-386544 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image rm kicbase/echo-server:functional-386544 --alsologtostderr                                                                              │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image ls                                                                                                                                      │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	│ image     │ functional-386544 image save --daemon kicbase/echo-server:functional-386544 --alsologtostderr                                                                   │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:51 UTC │ 08 Dec 25 00:51 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:51:29
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:51:29.270661  913965 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:51:29.270858  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.270877  913965 out.go:374] Setting ErrFile to fd 2...
	I1208 00:51:29.270883  913965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.271187  913965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:51:29.271612  913965 out.go:368] Setting JSON to false
	I1208 00:51:29.272534  913965 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":20042,"bootTime":1765135047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:51:29.272617  913965 start.go:143] virtualization:  
	I1208 00:51:29.275838  913965 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:51:29.279436  913965 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:51:29.279577  913965 notify.go:221] Checking for updates...
	I1208 00:51:29.285227  913965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:51:29.288262  913965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:51:29.291134  913965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:51:29.293999  913965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:51:29.296847  913965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:51:29.300084  913965 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:51:29.300777  913965 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:51:29.325912  913965 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:51:29.326034  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.396704  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.386909367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.396816  913965 docker.go:319] overlay module found
	I1208 00:51:29.399947  913965 out.go:179] * Using the docker driver based on existing profile
	I1208 00:51:29.402712  913965 start.go:309] selected driver: docker
	I1208 00:51:29.402732  913965 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.402834  913965 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:51:29.402950  913965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.458342  913965 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.448170088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.458809  913965 cni.go:84] Creating CNI manager for ""
	I1208 00:51:29.458888  913965 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:51:29.458933  913965 start.go:353] cluster config:
	{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.461944  913965 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 00:51:33 functional-386544 containerd[9748]: time="2025-12-08T00:51:33.272333001Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.096670407Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\""
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.099741880Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.103361379Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.111164568Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\" returns successfully"
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.355385462Z" level=info msg="No images store for sha256:78dfebaee79f5a0a3952743e65335ded82843184c05c5307bb924e360fb11708"
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.357982903Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.374212376Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:34 functional-386544 containerd[9748]: time="2025-12-08T00:51:34.374594640Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.428859035Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.431238153Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.433347442Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.443442822Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\" returns successfully"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.676662325Z" level=info msg="No images store for sha256:78dfebaee79f5a0a3952743e65335ded82843184c05c5307bb924e360fb11708"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.678886544Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.688477213Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:35 functional-386544 containerd[9748]: time="2025-12-08T00:51:35.689122003Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.499790938Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.502170080Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.505436254Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 08 00:51:36 functional-386544 containerd[9748]: time="2025-12-08T00:51:36.513199754Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-386544\" returns successfully"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.199508860Z" level=info msg="No images store for sha256:c084d52ed37e8e7a8bea071d83cf32d87a0515129f61673c9a09f4d19a04e6e4"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.201766401Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-386544\""
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.208900701Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 00:51:37 functional-386544 containerd[9748]: time="2025-12-08T00:51:37.210001676Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-386544\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 00:51:38.873072   23845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:38.873938   23845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:38.875518   23845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:38.875955   23845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1208 00:51:38.879171   23845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 00:51:38 up  5:34,  0 user,  load average: 0.56, 0.30, 0.56
	Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 00:51:35 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:36 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 507.
	Dec 08 00:51:36 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:36 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:36 functional-386544 kubelet[23645]: E1208 00:51:36.663702   23645 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:36 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:36 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:37 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 08 00:51:37 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:37 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:37 functional-386544 kubelet[23695]: E1208 00:51:37.390346   23695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:37 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:37 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 509.
	Dec 08 00:51:38 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:38 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:38 functional-386544 kubelet[23749]: E1208 00:51:38.146268   23749 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 510.
	Dec 08 00:51:38 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:38 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 00:51:38 functional-386544 kubelet[23849]: E1208 00:51:38.911330   23849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 00:51:38 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 2 (328.404094ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.46s)
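Note: every failure in this batch shares one root cause, visible in the kubelet log above: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver on port 8441 never comes up and each kubectl call fails with "connection refused". A minimal check of the node's cgroup version, assuming the same minikube binary and profile as above (stat prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1):
	# sketch: confirm which cgroup version the kicbase node container is running
	out/minikube-linux-arm64 -p functional-386544 ssh "stat -fc %T /sys/fs/cgroup"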

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1208 00:49:24.582129  909742 out.go:360] Setting OutFile to fd 1 ...
I1208 00:49:24.582504  909742 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:49:24.582513  909742 out.go:374] Setting ErrFile to fd 2...
I1208 00:49:24.582519  909742 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:49:24.582780  909742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:49:24.583045  909742 mustload.go:66] Loading cluster: functional-386544
I1208 00:49:24.583476  909742 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:49:24.583931  909742 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:49:24.611067  909742 host.go:66] Checking if "functional-386544" exists ...
I1208 00:49:24.611404  909742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:49:24.737822  909742 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:49:24.726624301 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:49:24.737960  909742 api_server.go:166] Checking apiserver status ...
I1208 00:49:24.738019  909742 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1208 00:49:24.738066  909742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:49:24.781826  909742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
W1208 00:49:24.903952  909742 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1208 00:49:24.907441  909742 out.go:179] * The control-plane node functional-386544 apiserver is not running: (state=Stopped)
I1208 00:49:24.912110  909742 out.go:179]   To start a cluster, run: "minikube start -p functional-386544"

stdout: * The control-plane node functional-386544 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-386544"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 909743: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
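Note: exit code 103 is minikube's "apiserver not running" path. Before opening a tunnel, the command probes apiserver health with the pgrep shown above; with kubelet crash-looping, no kube-apiserver process exists, so the probe fails and the command bails out. A sketch of the same probe run by hand (assumes the profile's node container is up; pgrep exits 1 when nothing matches):
	# expect exit status 1 here, matching the "stopped: unable to get apiserver pid" log line
	out/minikube-linux-arm64 -p functional-386544 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"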

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-386544 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-386544 apply -f testdata/testsvc.yaml: exit status 1 (103.758438ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-386544 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)
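Note: the stderr above suggests --validate=false, but that would only skip the failing OpenAPI download; the apply itself still needs a reachable apiserver at 192.168.49.2:8441. A sketch of the suggested bypass, for completeness (kubectl's --validate flag is standard; the manifest path is the testdata file used above):
	# validation is skipped, yet the POST to the apiserver would still be refused here
	kubectl --context functional-386544 apply --validate=false -f testdata/testsvc.yaml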

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (107.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.100.224.0": Temporary Error: Get "http://10.100.224.0": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-386544 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-386544 get svc nginx-svc: exit status 1 (56.373812ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-386544 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (107.41s)
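Note: AccessDirect polls the service's ClusterIP through the running tunnel. Since WaitService/Setup never created nginx-svc, the probe of 10.100.224.0 can only time out. A hand-run equivalent of the test's HTTP check, assuming curl on the host (the IP is the one reported by the failed test above):
	# with a healthy tunnel and a deployed nginx-svc this prints the "Welcome to nginx!" page
	curl --max-time 10 http://10.100.224.0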

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-386544 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-386544 create deployment hello-node --image kicbase/echo-server: exit status 1 (61.593976ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-386544 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 service list: exit status 103 (267.534314ms)

-- stdout --
	* The control-plane node functional-386544 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-386544"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-386544 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-386544 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-386544\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 service list -o json: exit status 103 (275.434136ms)

-- stdout --
	* The control-plane node functional-386544 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-386544"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-386544 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.28s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 service --namespace=default --https --url hello-node: exit status 103 (270.490766ms)

-- stdout --
	* The control-plane node functional-386544 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-386544"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-386544 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 service hello-node --url --format={{.IP}}: exit status 103 (261.760356ms)

-- stdout --
	* The control-plane node functional-386544 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-386544"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-386544 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-386544 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-386544\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.26s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 service hello-node --url: exit status 103 (282.08098ms)

-- stdout --
	* The control-plane node functional-386544 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-386544"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-386544 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-386544 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-386544"
functional_test.go:1579: failed to parse "* The control-plane node functional-386544 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-386544\"": parse "* The control-plane node functional-386544 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-386544\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765155080131625029" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765155080131625029" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765155080131625029" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001/test-1765155080131625029
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.538047ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 00:51:20.486427  846711 retry.go:31] will retry after 503.811811ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 00:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 00:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 00:51 test-1765155080131625029
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh cat /mount-9p/test-1765155080131625029
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-386544 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-386544 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.937886ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-386544 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (283.799454ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=34639)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  8 00:51 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  8 00:51 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  8 00:51 test-1765155080131625029
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-386544 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:34639
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...


functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001:/mount-9p --alsologtostderr -v=1] stderr:
I1208 00:51:20.207432  912090 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:20.207647  912090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:20.207655  912090 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:20.207661  912090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:20.207932  912090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:20.208203  912090 mustload.go:66] Loading cluster: functional-386544
I1208 00:51:20.208612  912090 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:20.209181  912090 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:20.227765  912090 host.go:66] Checking if "functional-386544" exists ...
I1208 00:51:20.228134  912090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:51:20.324630  912090 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:20.314612724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:51:20.324788  912090 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 00:51:20.352004  912090 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001 into VM as /mount-9p ...
I1208 00:51:20.355133  912090 out.go:179]   - Mount type:   9p
I1208 00:51:20.358131  912090 out.go:179]   - User ID:      docker
I1208 00:51:20.360999  912090 out.go:179]   - Group ID:     docker
I1208 00:51:20.363952  912090 out.go:179]   - Version:      9p2000.L
I1208 00:51:20.367147  912090 out.go:179]   - Message Size: 262144
I1208 00:51:20.370348  912090 out.go:179]   - Options:      map[]
I1208 00:51:20.374582  912090 out.go:179]   - Bind Address: 192.168.49.1:34639
I1208 00:51:20.377406  912090 out.go:179] * Userspace file server: 
I1208 00:51:20.377735  912090 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1208 00:51:20.377826  912090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:20.400513  912090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:20.505270  912090 mount.go:180] unmount for /mount-9p ran successfully
I1208 00:51:20.505296  912090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1208 00:51:20.513934  912090 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=34639,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1208 00:51:20.524378  912090 main.go:127] stdlog: ufs.go:141 connected
I1208 00:51:20.524540  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tversion tag 65535 msize 262144 version '9P2000.L'
I1208 00:51:20.524580  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rversion tag 65535 msize 262144 version '9P2000'
I1208 00:51:20.524809  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1208 00:51:20.524865  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rattach tag 0 aqid (44340 fb7107c1 'd')
I1208 00:51:20.526579  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 0
I1208 00:51:20.526669  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44340 fb7107c1 'd') m d775 at 0 mt 1765155080 l 4096 t 0 d 0 ext )
I1208 00:51:20.535384  912090 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/.mount-process: {Name:mk111f58a99259e3d9811f765582fe28c8b5865b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:51:20.535632  912090 mount.go:105] mount successful: ""
I1208 00:51:20.539090  912090 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2783907582/001 to /mount-9p
I1208 00:51:20.542086  912090 out.go:203] 
I1208 00:51:20.544999  912090 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1208 00:51:21.519157  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 0
I1208 00:51:21.519242  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44340 fb7107c1 'd') m d775 at 0 mt 1765155080 l 4096 t 0 d 0 ext )
I1208 00:51:21.519621  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 1 
I1208 00:51:21.519659  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 
I1208 00:51:21.519800  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Topen tag 0 fid 1 mode 0
I1208 00:51:21.519851  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Ropen tag 0 qid (44340 fb7107c1 'd') iounit 0
I1208 00:51:21.519975  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 0
I1208 00:51:21.520012  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44340 fb7107c1 'd') m d775 at 0 mt 1765155080 l 4096 t 0 d 0 ext )
I1208 00:51:21.520186  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 0 count 262120
I1208 00:51:21.520342  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 258
I1208 00:51:21.520505  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 261862
I1208 00:51:21.520546  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:21.520690  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:51:21.520719  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:21.520866  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1208 00:51:21.520900  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44342 fb7107c1 '') 
I1208 00:51:21.521019  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.521062  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44342 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.521219  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.521254  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44342 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.521384  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:21.521409  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:21.521554  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'test-1765155080131625029' 
I1208 00:51:21.521589  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44348 fb7107c1 '') 
I1208 00:51:21.521702  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.521735  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.521867  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.521902  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.522030  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:21.522054  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:21.522190  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1208 00:51:21.522227  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44344 fb7107c1 '') 
I1208 00:51:21.522338  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.522371  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44344 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.522526  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:21.522560  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44344 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.522687  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:21.522714  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:21.522825  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:51:21.522852  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:21.522985  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 1
I1208 00:51:21.523011  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:21.818801  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 1 0:'test-1765155080131625029' 
I1208 00:51:21.818878  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44348 fb7107c1 '') 
I1208 00:51:21.819062  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 1
I1208 00:51:21.819112  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.819253  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 1 newfid 2 
I1208 00:51:21.819288  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 
I1208 00:51:21.819428  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Topen tag 0 fid 2 mode 0
I1208 00:51:21.819477  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Ropen tag 0 qid (44348 fb7107c1 '') iounit 0
I1208 00:51:21.819613  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 1
I1208 00:51:21.819648  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:21.819802  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 2 offset 0 count 262120
I1208 00:51:21.819848  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 24
I1208 00:51:21.819965  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 2 offset 24 count 262120
I1208 00:51:21.819992  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:21.820155  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 2 offset 24 count 262120
I1208 00:51:21.820203  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:21.820456  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:21.820488  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:21.820680  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 1
I1208 00:51:21.820710  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.165857  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 0
I1208 00:51:22.165951  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44340 fb7107c1 'd') m d775 at 0 mt 1765155080 l 4096 t 0 d 0 ext )
I1208 00:51:22.166319  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 1 
I1208 00:51:22.166362  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 
I1208 00:51:22.166505  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Topen tag 0 fid 1 mode 0
I1208 00:51:22.166557  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Ropen tag 0 qid (44340 fb7107c1 'd') iounit 0
I1208 00:51:22.166702  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 0
I1208 00:51:22.166757  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (44340 fb7107c1 'd') m d775 at 0 mt 1765155080 l 4096 t 0 d 0 ext )
I1208 00:51:22.166910  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 0 count 262120
I1208 00:51:22.167022  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 258
I1208 00:51:22.167172  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 261862
I1208 00:51:22.167204  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:22.167323  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:51:22.167351  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:22.167491  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1208 00:51:22.167534  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44342 fb7107c1 '') 
I1208 00:51:22.167647  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.167684  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44342 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.167823  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.167860  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (44342 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.167981  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:22.168004  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.168145  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'test-1765155080131625029' 
I1208 00:51:22.168180  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44348 fb7107c1 '') 
I1208 00:51:22.168294  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.168356  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.168507  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.168560  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('test-1765155080131625029' 'jenkins' 'jenkins' '' q (44348 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.168679  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:22.168730  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.168895  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1208 00:51:22.168947  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rwalk tag 0 (44344 fb7107c1 '') 
I1208 00:51:22.169075  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.169111  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44344 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.169268  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tstat tag 0 fid 2
I1208 00:51:22.169304  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (44344 fb7107c1 '') m 644 at 0 mt 1765155080 l 24 t 0 d 0 ext )
I1208 00:51:22.169429  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 2
I1208 00:51:22.169454  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.169593  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tread tag 0 fid 1 offset 258 count 262120
I1208 00:51:22.169639  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rread tag 0 count 0
I1208 00:51:22.169779  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 1
I1208 00:51:22.169811  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.171092  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1208 00:51:22.171180  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rerror tag 0 ename 'file not found' ecode 0
I1208 00:51:22.451795  912090 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:40104 Tclunk tag 0 fid 0
I1208 00:51:22.451845  912090 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:40104 Rclunk tag 0
I1208 00:51:22.453053  912090 main.go:127] stdlog: ufs.go:147 disconnected
I1208 00:51:22.475369  912090 out.go:179] * Unmounting /mount-9p ...
I1208 00:51:22.478221  912090 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1208 00:51:22.485049  912090 mount.go:180] unmount for /mount-9p ran successfully
I1208 00:51:22.485157  912090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/.mount-process: {Name:mk111f58a99259e3d9811f765582fe28c8b5865b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:51:22.488366  912090 out.go:203] 
W1208 00:51:22.491363  912090 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1208 00:51:22.494266  912090 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.44s)
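Note on the trace above: each ">>>" line is a 9p T-message sent by the guest (192.168.49.2) and the following "<<<" line is the host mount server's R-reply. The session walks and reads the mount root, stats the three files created by the test, fails a Twalk to 'pod-dates' with Rerror 'file not found', and is then torn down when the mount process receives SIGTERM (the MK_INTERRUPTED exit). A minimal manual reproduction sketch, outside the test harness (profile name and mount point are taken from the log; the host source directory is an assumption):

	mkdir -p /tmp/mount-src
	out/minikube-linux-arm64 -p functional-386544 mount /tmp/mount-src:/mount-9p &
	MOUNT_PID=$!
	# list the mount from inside the guest; a missing entry surfaces as the
	# Rerror 'file not found' seen in the 9p trace
	out/minikube-linux-arm64 -p functional-386544 ssh -- ls -la /mount-9p
	kill "$MOUNT_PID"   # SIGTERM triggers the same unmount-and-exit path as above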

                                                
                                    
TestKubernetesUpgrade (795.98s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.524713969s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-614992
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-614992: (1.482661215s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-614992 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-614992 status --format={{.Host}}: exit status 7 (128.474392ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
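Note: minikube status encodes component health in its exit code as a bitmask read right to left (1 = host not OK, 2 = kubelet not OK, 4 = apiserver not OK), so exit status 7 against a deliberately stopped profile is the expected value here, which is why the test logs "(may be ok)". The same check, run by hand (profile name from the log):

	out/minikube-linux-arm64 -p kubernetes-upgrade-614992 status --format={{.Host}}
	echo $?   # 7 for this stopped profile, matching the run above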
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m32.585233149s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-614992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-614992" primary control-plane node in "kubernetes-upgrade-614992" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:22:57.817519 1043229 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:22:57.817670 1043229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:22:57.817703 1043229 out.go:374] Setting ErrFile to fd 2...
	I1208 01:22:57.817726 1043229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:22:57.818033 1043229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:22:57.818487 1043229 out.go:368] Setting JSON to false
	I1208 01:22:57.819460 1043229 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":21931,"bootTime":1765135047,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:22:57.819534 1043229 start.go:143] virtualization:  
	I1208 01:22:57.827358 1043229 out.go:179] * [kubernetes-upgrade-614992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:22:57.830423 1043229 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:22:57.830538 1043229 notify.go:221] Checking for updates...
	I1208 01:22:57.836844 1043229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:22:57.840084 1043229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:22:57.843464 1043229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:22:57.846409 1043229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:22:57.849369 1043229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:22:57.852818 1043229 config.go:182] Loaded profile config "kubernetes-upgrade-614992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1208 01:22:57.853429 1043229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:22:57.899885 1043229 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:22:57.900208 1043229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:22:58.021431 1043229 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-08 01:22:58.008601029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:22:58.021548 1043229 docker.go:319] overlay module found
	I1208 01:22:58.025093 1043229 out.go:179] * Using the docker driver based on existing profile
	I1208 01:22:58.027900 1043229 start.go:309] selected driver: docker
	I1208 01:22:58.027921 1043229 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-614992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-614992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:22:58.028013 1043229 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:22:58.028708 1043229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:22:58.120955 1043229 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-08 01:22:58.110537481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:22:58.121505 1043229 cni.go:84] Creating CNI manager for ""
	I1208 01:22:58.121695 1043229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:22:58.121758 1043229 start.go:353] cluster config:
	{Name:kubernetes-upgrade-614992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-614992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:22:58.126898 1043229 out.go:179] * Starting "kubernetes-upgrade-614992" primary control-plane node in "kubernetes-upgrade-614992" cluster
	I1208 01:22:58.129857 1043229 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:22:58.132816 1043229 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:22:58.135845 1043229 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:22:58.135895 1043229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:22:58.135911 1043229 cache.go:65] Caching tarball of preloaded images
	I1208 01:22:58.135937 1043229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:22:58.135995 1043229 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:22:58.136004 1043229 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:22:58.136109 1043229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/config.json ...
	I1208 01:22:58.158703 1043229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:22:58.158723 1043229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:22:58.158737 1043229 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:22:58.158766 1043229 start.go:360] acquireMachinesLock for kubernetes-upgrade-614992: {Name:mka01314d817d9e338867f3f83678acdaee2f448 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:22:58.158831 1043229 start.go:364] duration metric: took 47.557µs to acquireMachinesLock for "kubernetes-upgrade-614992"
	I1208 01:22:58.158851 1043229 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:22:58.158855 1043229 fix.go:54] fixHost starting: 
	I1208 01:22:58.159270 1043229 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-614992 --format={{.State.Status}}
	I1208 01:22:58.180890 1043229 fix.go:112] recreateIfNeeded on kubernetes-upgrade-614992: state=Stopped err=<nil>
	W1208 01:22:58.180917 1043229 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:22:58.184198 1043229 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-614992" ...
	I1208 01:22:58.184280 1043229 cli_runner.go:164] Run: docker start kubernetes-upgrade-614992
	I1208 01:22:58.435056 1043229 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-614992 --format={{.State.Status}}
	I1208 01:22:58.458501 1043229 kic.go:430] container "kubernetes-upgrade-614992" state is running.
	I1208 01:22:58.461365 1043229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-614992
	I1208 01:22:58.482137 1043229 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/config.json ...
	I1208 01:22:58.482374 1043229 machine.go:94] provisionDockerMachine start ...
	I1208 01:22:58.484261 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:22:58.506070 1043229 main.go:143] libmachine: Using SSH client type: native
	I1208 01:22:58.506419 1043229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I1208 01:22:58.506439 1043229 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:22:58.508377 1043229 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:23:01.678273 1043229 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-614992
	
	I1208 01:23:01.678302 1043229 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-614992"
	I1208 01:23:01.678375 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:01.701301 1043229 main.go:143] libmachine: Using SSH client type: native
	I1208 01:23:01.701601 1043229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I1208 01:23:01.701612 1043229 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-614992 && echo "kubernetes-upgrade-614992" | sudo tee /etc/hostname
	I1208 01:23:01.882264 1043229 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-614992
	
	I1208 01:23:01.882348 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:01.910164 1043229 main.go:143] libmachine: Using SSH client type: native
	I1208 01:23:01.910522 1043229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33783 <nil> <nil>}
	I1208 01:23:01.910546 1043229 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-614992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-614992/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-614992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:23:02.075174 1043229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
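Note: the inlined script above ensures the guest resolves its own hostname without DNS by pinning it to 127.0.1.1 in /etc/hosts. A quick spot-check sketch from the host (assumed to be run after the start completes):

	out/minikube-linux-arm64 -p kubernetes-upgrade-614992 ssh -- grep kubernetes-upgrade-614992 /etc/hosts
	# expected: 127.0.1.1 kubernetes-upgrade-614992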
	I1208 01:23:02.075203 1043229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:23:02.075258 1043229 ubuntu.go:190] setting up certificates
	I1208 01:23:02.075268 1043229 provision.go:84] configureAuth start
	I1208 01:23:02.075366 1043229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-614992
	I1208 01:23:02.093545 1043229 provision.go:143] copyHostCerts
	I1208 01:23:02.093629 1043229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:23:02.093645 1043229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:23:02.093717 1043229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:23:02.093854 1043229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:23:02.093867 1043229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:23:02.093891 1043229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:23:02.093953 1043229 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:23:02.093962 1043229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:23:02.093983 1043229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:23:02.094079 1043229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-614992 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-614992 localhost minikube]
	I1208 01:23:02.266890 1043229 provision.go:177] copyRemoteCerts
	I1208 01:23:02.267013 1043229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:23:02.267121 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:02.298725 1043229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kubernetes-upgrade-614992/id_rsa Username:docker}
	I1208 01:23:02.410681 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:23:02.431424 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:23:02.453179 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1208 01:23:02.472940 1043229 provision.go:87] duration metric: took 397.654828ms to configureAuth
	I1208 01:23:02.472983 1043229 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:23:02.473229 1043229 config.go:182] Loaded profile config "kubernetes-upgrade-614992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:23:02.473246 1043229 machine.go:97] duration metric: took 3.990854973s to provisionDockerMachine
	I1208 01:23:02.473256 1043229 start.go:293] postStartSetup for "kubernetes-upgrade-614992" (driver="docker")
	I1208 01:23:02.473293 1043229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:23:02.473371 1043229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:23:02.473462 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:02.493524 1043229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kubernetes-upgrade-614992/id_rsa Username:docker}
	I1208 01:23:02.608635 1043229 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:23:02.613205 1043229 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:23:02.613232 1043229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:23:02.613244 1043229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:23:02.613304 1043229 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:23:02.613380 1043229 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:23:02.613488 1043229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:23:02.623259 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:23:02.649804 1043229 start.go:296] duration metric: took 176.509789ms for postStartSetup
	I1208 01:23:02.649903 1043229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:23:02.649943 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:02.668139 1043229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kubernetes-upgrade-614992/id_rsa Username:docker}
	I1208 01:23:02.773564 1043229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:23:02.783132 1043229 fix.go:56] duration metric: took 4.624268407s for fixHost
	I1208 01:23:02.783161 1043229 start.go:83] releasing machines lock for "kubernetes-upgrade-614992", held for 4.624320813s
	I1208 01:23:02.783246 1043229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-614992
	I1208 01:23:02.812062 1043229 ssh_runner.go:195] Run: cat /version.json
	I1208 01:23:02.812269 1043229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:23:02.812333 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:02.812527 1043229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-614992
	I1208 01:23:02.846745 1043229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kubernetes-upgrade-614992/id_rsa Username:docker}
	I1208 01:23:02.868033 1043229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33783 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kubernetes-upgrade-614992/id_rsa Username:docker}
	I1208 01:23:02.971938 1043229 ssh_runner.go:195] Run: systemctl --version
	I1208 01:23:03.091574 1043229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:23:03.097407 1043229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:23:03.097478 1043229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:23:03.106025 1043229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:23:03.106047 1043229 start.go:496] detecting cgroup driver to use...
	I1208 01:23:03.106078 1043229 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:23:03.106136 1043229 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:23:03.124607 1043229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:23:03.142354 1043229 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:23:03.142433 1043229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:23:03.158826 1043229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:23:03.175714 1043229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:23:03.324639 1043229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:23:03.475704 1043229 docker.go:234] disabling docker service ...
	I1208 01:23:03.475786 1043229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:23:03.497671 1043229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:23:03.515651 1043229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:23:03.673677 1043229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:23:03.831279 1043229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:23:03.845578 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:23:03.861770 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:23:03.873007 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:23:03.884926 1043229 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:23:03.884992 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:23:03.894605 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:23:03.904069 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:23:03.913868 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:23:03.923508 1043229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:23:03.932400 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:23:03.941463 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:23:03.952742 1043229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:23:03.963306 1043229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:23:03.972493 1043229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:23:03.981303 1043229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:23:04.135916 1043229 ssh_runner.go:195] Run: sudo systemctl restart containerd
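Note: the sed chain above rewrites /etc/containerd/config.toml before the restart: it pins sandbox_image to registry.k8s.io/pause:3.10.1, forces SystemdCgroup = false to match the detected "cgroupfs" host driver, and re-enables unprivileged ports. A spot-check sketch from the host (profile name from the log):

	out/minikube-linux-arm64 -p kubernetes-upgrade-614992 ssh -- grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
	# expected after the rewrite: SystemdCgroup = false and
	# sandbox_image = "registry.k8s.io/pause:3.10.1"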
	I1208 01:23:04.306862 1043229 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:23:04.306946 1043229 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:23:04.311597 1043229 start.go:564] Will wait 60s for crictl version
	I1208 01:23:04.311673 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:04.316139 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:23:04.360177 1043229 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:23:04.360291 1043229 ssh_runner.go:195] Run: containerd --version
	I1208 01:23:04.386763 1043229 ssh_runner.go:195] Run: containerd --version
	I1208 01:23:04.413772 1043229 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:23:04.416855 1043229 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-614992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:23:04.435091 1043229 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:23:04.439733 1043229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:23:04.449848 1043229 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-614992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-614992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:23:04.449976 1043229 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:23:04.450047 1043229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:23:04.484837 1043229 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:23:04.484914 1043229 ssh_runner.go:195] Run: which lz4
	I1208 01:23:04.489003 1043229 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 01:23:04.495337 1043229 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 01:23:04.495373 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (305624510 bytes)
	I1208 01:23:06.517222 1043229 containerd.go:563] duration metric: took 2.028265547s to copy over tarball
	I1208 01:23:06.517324 1043229 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 01:23:08.811936 1043229 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.29458253s)
	I1208 01:23:08.812011 1043229 kubeadm.go:910] preload failed, will try to load cached images: extracting tarball: 
	** stderr ** 
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
	
	** /stderr **: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: Process exited with status 2
	stdout:
	
	stderr:
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Europe: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Brazil: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Canada: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Antarctica: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Chile: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Etc: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Pacific: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Mexico: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Australia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/US: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Asia: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Atlantic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/America: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Arctic: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Africa: Cannot open: File exists
	tar: ./lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/3/fs/usr/share/zoneinfo/posix/Indian: Cannot open: File exists
	tar: Exiting with failure status due to previous errors
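Note: the "Cannot open: File exists" errors come from extracting the v1.35.0-beta.0 preload tarball over a /var/lib/containerd tree that still holds overlayfs snapshots from the earlier v1.28.0 start; tar exits with status 2 and minikube falls back to loading cached images one by one (the "preload failed, will try to load cached images" line above). A sketch for confirming the pre-existing snapshots (path taken from the tar messages):

	out/minikube-linux-arm64 -p kubernetes-upgrade-614992 ssh -- sudo ls /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots
	# a non-empty listing left over from the v1.28.0 run explains the conflicts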
	I1208 01:23:08.812094 1043229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:23:08.893990 1043229 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:23:08.894056 1043229 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1208 01:23:08.894191 1043229 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:08.894407 1043229 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:08.894558 1043229 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:08.894651 1043229 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:08.894836 1043229 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:08.894942 1043229 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1208 01:23:08.895059 1043229 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:08.895210 1043229 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:23:08.898170 1043229 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:08.898584 1043229 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1208 01:23:08.898739 1043229 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:08.898868 1043229 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:08.898991 1043229 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:08.899106 1043229 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:08.899216 1043229 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:23:08.899529 1043229 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.234942 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1208 01:23:09.235048 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.259933 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1208 01:23:09.260014 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:09.287445 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1208 01:23:09.287536 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:09.344664 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1208 01:23:09.344742 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1208 01:23:09.395402 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1208 01:23:09.395490 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:09.406754 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1208 01:23:09.406846 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:09.421970 1043229 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1208 01:23:09.422050 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:09.483030 1043229 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1208 01:23:09.483097 1043229 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.483149 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.483205 1043229 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1208 01:23:09.483389 1043229 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:09.483425 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.483261 1043229 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1208 01:23:09.483464 1043229 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:09.483486 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.483294 1043229 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1208 01:23:09.483512 1043229 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1208 01:23:09.483544 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.491457 1043229 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1208 01:23:09.491543 1043229 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:09.491632 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.508975 1043229 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1208 01:23:09.509034 1043229 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:09.509100 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.509208 1043229 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1208 01:23:09.509229 1043229 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:09.509263 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:09.510739 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:23:09.510816 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.510841 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:09.510894 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:09.510953 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:09.523786 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:09.523953 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:09.643342 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:09.643448 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:23:09.643500 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.643530 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:09.643594 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:09.683911 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:09.684078 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:09.824093 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:23:09.824185 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:23:09.824254 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:23:09.824329 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:23:09.824396 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:23:09.827683 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:23:09.827748 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:23:09.939215 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:23:09.939303 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:23:09.939351 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:23:09.939410 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:23:09.939508 1043229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1208 01:23:09.939597 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:23:09.949966 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:23:09.950013 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:23:09.950064 1043229 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1208 01:23:09.950126 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1208 01:23:09.950228 1043229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:23:09.958144 1043229 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1208 01:23:09.958193 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
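
The stat/scp pairs above are a check-then-transfer step: `stat -c "%s %y"` on the remote image path exits non-zero when the tarball is missing, and only then is the file copied over from the local cache. A minimal Go sketch of that pattern, assuming plain ssh/scp binaries on PATH (the host name and paths are illustrative, not minikube's internal helpers):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteFile copies src to host:dst only when dst is absent,
    // mirroring the stat-then-scp sequence in the log above.
    func ensureRemoteFile(host, src, dst string) error {
        // stat exits non-zero when the remote file does not exist.
        if exec.Command("ssh", host, "stat -c '%s %y' "+dst).Run() == nil {
            return nil // already present on the node; skip the transfer
        }
        return exec.Command("scp", src, host+":"+dst).Run()
    }

    func main() {
        err := ensureRemoteFile("minikube",
            "/tmp/cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1")
        fmt.Println(err)
    }
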
	I1208 01:23:10.021865 1043229 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1208 01:23:10.021946 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1208 01:23:10.223420 1043229 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:23:10.223484 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	W1208 01:23:10.424269 1043229 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:23:10.424414 1043229 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1208 01:23:10.424493 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:23:11.179192 1043229 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1208 01:23:11.179286 1043229 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:23:11.179370 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:11.183960 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:23:11.333758 1043229 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:23:11.333874 1043229 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:23:11.338753 1043229 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1208 01:23:11.338808 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1208 01:23:11.460702 1043229 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:23:11.460855 1043229 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:23:12.027483 1043229 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1208 01:23:12.027612 1043229 cache_images.go:94] duration metric: took 3.133521937s to LoadCachedImages
	W1208 01:23:12.027817 1043229 out.go:285] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0: no such file or directory
	I1208 01:23:12.027873 1043229 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:23:12.028019 1043229 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-614992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-614992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:23:12.028122 1043229 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:23:12.070000 1043229 cni.go:84] Creating CNI manager for ""
	I1208 01:23:12.070028 1043229 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:23:12.070052 1043229 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:23:12.070075 1043229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-614992 NodeName:kubernetes-upgrade-614992 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:23:12.070205 1043229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-614992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
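
In the v1beta4 config rendered above, component extraArgs are a list of name/value pairs rather than the v1beta3 map form. A small sketch of how such a fragment decodes, assuming the third-party gopkg.in/yaml.v3 package is available (the types here are illustrative, not kubeadm's own):

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // Arg mirrors one entry of a v1beta4 extraArgs list.
    type Arg struct {
        Name  string `yaml:"name"`
        Value string `yaml:"value"`
    }

    const fragment = `
    extraArgs:
      - name: "leader-elect"
        value: "false"
    `

    func main() {
        var doc struct {
            ExtraArgs []Arg `yaml:"extraArgs"`
        }
        if err := yaml.Unmarshal([]byte(fragment), &doc); err != nil {
            panic(err)
        }
        for _, a := range doc.ExtraArgs {
            // Each pair ultimately becomes a component flag.
            fmt.Printf("--%s=%s\n", a.Name, a.Value) // --leader-elect=false
        }
    }
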
	
	I1208 01:23:12.070283 1043229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:23:12.081108 1043229 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:23:12.081181 1043229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:23:12.090767 1043229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1208 01:23:12.107087 1043229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:23:12.123679 1043229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1208 01:23:12.140492 1043229 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:23:12.146087 1043229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
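
The bash one-liner above makes the /etc/hosts pin idempotent: any existing control-plane.minikube.internal line is filtered out before the fresh IP mapping is appended, and the rebuilt file is copied back in one step. A rough Go equivalent of the same idea (paths hard-coded for illustration only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites path so that exactly one "ip<TAB>name" line remains.
    func pinHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the same host name.
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        // Write via a temp file, then rename, so readers never see a partial file.
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, path)
    }

    func main() {
        fmt.Println(pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"))
    }
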
	I1208 01:23:12.159508 1043229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:23:12.315757 1043229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:23:12.334043 1043229 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992 for IP: 192.168.76.2
	I1208 01:23:12.334115 1043229 certs.go:195] generating shared ca certs ...
	I1208 01:23:12.334148 1043229 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:23:12.334329 1043229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:23:12.334414 1043229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:23:12.334504 1043229 certs.go:257] generating profile certs ...
	I1208 01:23:12.334656 1043229 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.key
	I1208 01:23:12.334784 1043229 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/apiserver.key.6bc90071
	I1208 01:23:12.334920 1043229 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/proxy-client.key
	I1208 01:23:12.335094 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:23:12.335165 1043229 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:23:12.335190 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:23:12.335248 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:23:12.335306 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:23:12.335375 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:23:12.335464 1043229 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:23:12.336353 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:23:12.414347 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:23:12.434511 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:23:12.462488 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:23:12.485758 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1208 01:23:12.506746 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:23:12.528257 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:23:12.551645 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:23:12.573363 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:23:12.590619 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:23:12.609411 1043229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:23:12.629662 1043229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:23:12.644940 1043229 ssh_runner.go:195] Run: openssl version
	I1208 01:23:12.652375 1043229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:23:12.661364 1043229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:23:12.670320 1043229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:23:12.674258 1043229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:23:12.674359 1043229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:23:12.715961 1043229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:23:12.723709 1043229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:23:12.731016 1043229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:23:12.739056 1043229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:23:12.743104 1043229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:23:12.743196 1043229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:23:12.786503 1043229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:23:12.794289 1043229 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:23:12.801641 1043229 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:23:12.809691 1043229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:23:12.813479 1043229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:23:12.813559 1043229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:23:12.855201 1043229 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:23:12.862667 1043229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:23:12.866390 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:23:12.907641 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:23:12.949705 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:23:12.995491 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:23:13.037902 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:23:13.096663 1043229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
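
Each `openssl x509 -checkend 86400` call above exits 0 only if the certificate will still be valid 24 hours from now; a non-zero exit on any of them would force regeneration instead of reuse. The same test can be done in-process with crypto/x509, as in this sketch (the path is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // d from now, matching openssl's -checkend semantics.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
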
	I1208 01:23:13.153092 1043229 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-614992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-614992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:23:13.153218 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:23:13.153329 1043229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:23:13.185626 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:23:13.185660 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:23:13.185666 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:23:13.185669 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:23:13.185672 1043229 cri.go:89] found id: ""
	I1208 01:23:13.185739 1043229 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1208 01:23:13.201579 1043229 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-08T01:23:13Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
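
The warning above is logged and the restart flow simply continues: with no pods paused, the runc state root /run/containerd/runc/k8s.io does not exist, so `runc list -f json` exits 1 and there is nothing to unpause. A sketch of that probe, with hypothetical helper names (not minikube's actual functions):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // runcContainer holds the two fields of runc's JSON state we care about.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // pausedContainers lists paused container IDs under the given runc root.
    func pausedContainers(root string) ([]string, error) {
        out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            // A missing state root surfaces as exit status 1; callers can
            // log it as a warning and proceed, as the log above does.
            return nil, err
        }
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            return nil, err
        }
        var ids []string
        for _, c := range cs {
            if strings.EqualFold(c.Status, "paused") {
                ids = append(ids, c.ID)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := pausedContainers("/run/containerd/runc/k8s.io")
        fmt.Println(ids, err)
    }
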
	I1208 01:23:13.201652 1043229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:23:13.209870 1043229 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:23:13.209893 1043229 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:23:13.209953 1043229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:23:13.217784 1043229 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:23:13.218654 1043229 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-614992" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:23:13.219002 1043229 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-614992" cluster setting kubeconfig missing "kubernetes-upgrade-614992" context setting]
	I1208 01:23:13.219668 1043229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:23:13.220476 1043229 kapi.go:59] client config for kubernetes-upgrade-614992: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.crt", KeyFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.key", CAFile:"/home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 01:23:13.221108 1043229 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 01:23:13.221134 1043229 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 01:23:13.221144 1043229 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 01:23:13.221149 1043229 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 01:23:13.221154 1043229 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 01:23:13.221477 1043229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:23:13.233700 1043229 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-08 01:22:35.148112702 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-08 01:23:12.136473073 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-614992"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
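
The drift check above leans on diff's exit-code contract: 0 means the files are identical, 1 means they differ (drift detected, reconfigure), and 2 means diff itself failed. In this run the v1beta3-to-v1beta4 extraArgs migration and the kubernetesVersion bump from v1.28.0 to v1.35.0-beta.0 are what trip it. A sketch of the check with the paths from the log:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new` and maps its exit code:
    // 0 -> no drift, 1 -> drift (patch returned), anything else -> error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical: keep the existing config
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // files differ: reconfigure the cluster
        }
        return false, "", err // exit code 2: diff itself failed
    }

    func main() {
        drifted, patch, err := configDrifted(
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(patch)
    }
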
	I1208 01:23:13.233727 1043229 kubeadm.go:1161] stopping kube-system containers ...
	I1208 01:23:13.233739 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 01:23:13.233810 1043229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:23:13.262529 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:23:13.262552 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:23:13.262558 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:23:13.262562 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:23:13.262565 1043229 cri.go:89] found id: ""
	I1208 01:23:13.262570 1043229 cri.go:252] Stopping containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:23:13.262627 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:23:13.266294 1043229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771
	I1208 01:23:13.303849 1043229 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 01:23:13.319129 1043229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:23:13.327732 1043229 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec  8 01:22 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  8 01:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  8 01:22 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  8 01:22 /etc/kubernetes/scheduler.conf
	
	I1208 01:23:13.327826 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:23:13.336110 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:23:13.344518 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:23:13.352096 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:23:13.352164 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:23:13.361003 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:23:13.368869 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:23:13.368938 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:23:13.376660 1043229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:23:13.384905 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:23:13.433100 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:23:14.411654 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:23:14.627678 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 01:23:14.681503 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
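
Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A simplified sketch of that sequence, invoking the versioned binary by full path instead of the env PATH wrapper seen in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
        // Phases run in the same order as the log above.
        for _, phase := range [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        } {
            args := append([]string{kubeadm, "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
                return
            }
        }
    }
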
	I1208 01:23:14.730836 1043229 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:23:14.730966 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:15.232115 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:15.731152 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:16.231174 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:16.731161 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:17.231261 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:17.731123 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:18.231137 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:18.731996 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:19.231157 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:19.731901 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:20.231185 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:20.731119 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:21.232092 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:21.731805 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:22.232093 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:22.731131 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:23.231714 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:23.731936 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:24.231130 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:24.731611 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:25.231147 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:25.732068 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:26.231104 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:26.731939 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:27.231657 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:27.732035 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:28.231966 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:28.731048 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:29.231286 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:29.731690 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:30.231212 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:30.731745 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:31.231863 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:31.731641 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:32.231848 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:32.731744 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:33.231935 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:33.731822 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:34.231157 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:34.731757 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:35.231763 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:35.731652 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:36.231871 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:36.731082 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:37.231135 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:37.731089 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:38.231792 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:38.731877 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:39.231985 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:39.732048 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:40.231139 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:40.732046 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:41.231151 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:41.731936 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:42.231813 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:42.731813 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:43.231441 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:43.732175 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:44.232111 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:44.731476 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:45.234783 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:45.731042 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:46.231694 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:46.731155 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:47.231671 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:47.731309 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:48.231686 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:48.731166 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:49.231728 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:49.731683 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:50.231870 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:50.731628 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:51.231759 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:51.731778 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:52.231100 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:52.731085 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:53.231355 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:53.731119 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:54.231173 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:54.731119 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:55.231406 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:55.731995 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:56.231143 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:56.731093 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:57.231817 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:57.731314 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:58.231301 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:58.731844 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:59.231673 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:23:59.731782 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:00.231197 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:00.732010 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:01.231663 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:01.731876 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:02.231735 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:02.731616 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:03.231424 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:03.732020 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:04.231917 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:04.731139 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:05.231265 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:05.731099 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:06.231804 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:06.731319 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:07.231749 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:07.731308 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:08.231597 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:08.731645 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:09.231463 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:09.731681 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:10.231044 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:10.731761 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:11.231665 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:11.731750 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:12.231377 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:12.731682 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:13.231312 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:13.731523 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:14.231634 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
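
The block above is a fixed-interval wait: `pgrep -xnf kube-apiserver.*minikube.*` is retried roughly every 500ms, and after about a minute without a hit (01:23:14 to 01:24:14) the helper dumps component logs below before resuming the poll. A minimal sketch of such a loop with a deadline:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls for a kube-apiserver process until it
    // appears or ctx expires, mirroring the pgrep loop in the log above.
    func waitForAPIServerProcess(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // deadline hit: caller gathers diagnostics
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        fmt.Println(waitForAPIServerProcess(ctx))
    }
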
	I1208 01:24:14.731156 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:14.731248 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:14.768814 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:14.768833 1043229 cri.go:89] found id: ""
	I1208 01:24:14.768842 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:14.768909 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:14.773157 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:14.773227 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:14.818990 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:14.819010 1043229 cri.go:89] found id: ""
	I1208 01:24:14.819018 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:14.819075 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:14.823231 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:14.823303 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:14.868729 1043229 cri.go:89] found id: ""
	I1208 01:24:14.868751 1043229 logs.go:282] 0 containers: []
	W1208 01:24:14.868759 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:14.868765 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:14.868821 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:14.913621 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:14.913641 1043229 cri.go:89] found id: ""
	I1208 01:24:14.913650 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:14.913716 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:14.918963 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:14.919034 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:14.956443 1043229 cri.go:89] found id: ""
	I1208 01:24:14.956530 1043229 logs.go:282] 0 containers: []
	W1208 01:24:14.956554 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:14.956590 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:14.956687 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:14.996877 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:14.996897 1043229 cri.go:89] found id: ""
	I1208 01:24:14.996906 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:14.996962 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:15.001373 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:15.001453 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:15.055553 1043229 cri.go:89] found id: ""
	I1208 01:24:15.055580 1043229 logs.go:282] 0 containers: []
	W1208 01:24:15.055589 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:15.055596 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:15.055672 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:15.092587 1043229 cri.go:89] found id: ""
	I1208 01:24:15.092618 1043229 logs.go:282] 0 containers: []
	W1208 01:24:15.092627 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:15.092641 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:15.092655 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:15.126100 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:15.126131 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:15.201568 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:15.201604 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:15.227682 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:15.227797 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:15.302037 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:15.302208 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:15.351803 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:15.351875 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:15.387830 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:15.387863 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:15.422018 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:15.422055 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:15.508086 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:15.508109 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:15.508123 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:18.059329 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:18.079026 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:18.079112 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:18.193011 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:18.193039 1043229 cri.go:89] found id: ""
	I1208 01:24:18.193049 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:18.193139 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:18.213546 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:18.213644 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:18.297289 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:18.297314 1043229 cri.go:89] found id: ""
	I1208 01:24:18.297334 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:18.297394 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:18.303032 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:18.303121 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:18.340070 1043229 cri.go:89] found id: ""
	I1208 01:24:18.340102 1043229 logs.go:282] 0 containers: []
	W1208 01:24:18.340115 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:18.340130 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:18.340198 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:18.375615 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:18.375648 1043229 cri.go:89] found id: ""
	I1208 01:24:18.375658 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:18.375721 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:18.380990 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:18.381103 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:18.421939 1043229 cri.go:89] found id: ""
	I1208 01:24:18.421987 1043229 logs.go:282] 0 containers: []
	W1208 01:24:18.421997 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:18.422053 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:18.422161 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:18.455564 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:18.455608 1043229 cri.go:89] found id: ""
	I1208 01:24:18.455618 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:18.455694 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:18.460824 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:18.460940 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:18.497324 1043229 cri.go:89] found id: ""
	I1208 01:24:18.497364 1043229 logs.go:282] 0 containers: []
	W1208 01:24:18.497378 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:18.497391 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:18.497477 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:18.529031 1043229 cri.go:89] found id: ""
	I1208 01:24:18.529067 1043229 logs.go:282] 0 containers: []
	W1208 01:24:18.529076 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:18.529089 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:18.529105 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:18.571135 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:18.571185 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:18.628827 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:18.628898 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:18.648590 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:18.648661 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:18.689000 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:18.689076 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:18.725505 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:18.725588 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:18.766818 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:18.766896 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:18.830904 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:18.830985 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:18.918840 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:18.918859 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:18.918872 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:21.482566 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:21.497145 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:21.497225 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:21.546217 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:21.546247 1043229 cri.go:89] found id: ""
	I1208 01:24:21.546255 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:21.546310 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:21.551347 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:21.551419 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:21.583849 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:21.583873 1043229 cri.go:89] found id: ""
	I1208 01:24:21.583882 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:21.583937 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:21.587895 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:21.587970 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:21.622257 1043229 cri.go:89] found id: ""
	I1208 01:24:21.622284 1043229 logs.go:282] 0 containers: []
	W1208 01:24:21.622295 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:21.622301 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:21.622367 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:21.656737 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:21.656759 1043229 cri.go:89] found id: ""
	I1208 01:24:21.656767 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:21.656837 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:21.660702 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:21.660768 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:21.696788 1043229 cri.go:89] found id: ""
	I1208 01:24:21.696819 1043229 logs.go:282] 0 containers: []
	W1208 01:24:21.696829 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:21.696835 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:21.696897 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:21.732310 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:21.732334 1043229 cri.go:89] found id: ""
	I1208 01:24:21.732345 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:21.732400 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:21.736223 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:21.736294 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:21.772617 1043229 cri.go:89] found id: ""
	I1208 01:24:21.772645 1043229 logs.go:282] 0 containers: []
	W1208 01:24:21.772654 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:21.772665 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:21.772722 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:21.816816 1043229 cri.go:89] found id: ""
	I1208 01:24:21.816842 1043229 logs.go:282] 0 containers: []
	W1208 01:24:21.816850 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:21.816863 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:21.816874 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:21.857399 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:21.857434 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:21.920302 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:21.920338 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:21.935416 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:21.935443 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:22.019550 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:22.019574 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:22.019587 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:22.063393 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:22.063437 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:22.124245 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:22.124279 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:22.193140 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:22.193181 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:22.240507 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:22.240539 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:24.785007 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:24.795608 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:24.795676 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:24.824147 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:24.824167 1043229 cri.go:89] found id: ""
	I1208 01:24:24.824175 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:24.824236 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:24.828534 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:24.828605 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:24.868863 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:24.868884 1043229 cri.go:89] found id: ""
	I1208 01:24:24.868892 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:24.868951 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:24.873658 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:24.873740 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:24.904892 1043229 cri.go:89] found id: ""
	I1208 01:24:24.904921 1043229 logs.go:282] 0 containers: []
	W1208 01:24:24.904931 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:24.904937 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:24.905023 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:24.943875 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:24.943962 1043229 cri.go:89] found id: ""
	I1208 01:24:24.943992 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:24.944090 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:24.949284 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:24.949439 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:24.989919 1043229 cri.go:89] found id: ""
	I1208 01:24:24.989996 1043229 logs.go:282] 0 containers: []
	W1208 01:24:24.990019 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:24.990038 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:24.990125 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:25.026112 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:25.026196 1043229 cri.go:89] found id: ""
	I1208 01:24:25.026221 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:25.026323 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:25.031781 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:25.031912 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:25.065110 1043229 cri.go:89] found id: ""
	I1208 01:24:25.065197 1043229 logs.go:282] 0 containers: []
	W1208 01:24:25.065220 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:25.065240 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:25.065337 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:25.100479 1043229 cri.go:89] found id: ""
	I1208 01:24:25.100556 1043229 logs.go:282] 0 containers: []
	W1208 01:24:25.100599 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:25.100634 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:25.100688 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:25.141203 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:25.146542 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:25.236348 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:25.236441 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:25.277336 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:25.277410 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:25.311317 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:25.311391 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:25.344970 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:25.345050 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:25.388731 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:25.388755 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:25.408290 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:25.408365 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:25.475866 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:25.475962 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:25.559408 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
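Every pass ends the same way: only the kube-apiserver, etcd, kube-scheduler, and kube-controller-manager containers exist (no coredns, kube-proxy, kindnet, or storage-provisioner), and "describe nodes" fails because nothing accepts connections on localhost:8443. Two quick manual probes of that endpoint, offered purely as assumed diagnostics rather than steps the harness runs (the kubectl binary and kubeconfig paths are copied from the failing command above):

    # Is anything serving on the port kubectl is dialing?
    curl -ksS --max-time 5 https://localhost:8443/healthz; echo
    # Ask the apiserver directly, with the same binary and kubeconfig the test uses.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz || true

Connection refused from both would point at an apiserver whose container was created but whose process is not listening, consistent with the repeated warnings in this log.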
	I1208 01:24:28.059623 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:28.076602 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:28.076692 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:28.139008 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:28.139041 1043229 cri.go:89] found id: ""
	I1208 01:24:28.139051 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:28.139134 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:28.143371 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:28.143470 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:28.205718 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:28.205749 1043229 cri.go:89] found id: ""
	I1208 01:24:28.205757 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:28.205837 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:28.211454 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:28.211544 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:28.261010 1043229 cri.go:89] found id: ""
	I1208 01:24:28.261036 1043229 logs.go:282] 0 containers: []
	W1208 01:24:28.261047 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:28.261053 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:28.261124 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:28.296851 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:28.296875 1043229 cri.go:89] found id: ""
	I1208 01:24:28.296884 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:28.296947 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:28.301207 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:28.301281 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:28.340948 1043229 cri.go:89] found id: ""
	I1208 01:24:28.340978 1043229 logs.go:282] 0 containers: []
	W1208 01:24:28.340987 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:28.340994 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:28.341054 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:28.373642 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:28.373664 1043229 cri.go:89] found id: ""
	I1208 01:24:28.373673 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:28.373760 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:28.377912 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:28.378009 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:28.409002 1043229 cri.go:89] found id: ""
	I1208 01:24:28.409033 1043229 logs.go:282] 0 containers: []
	W1208 01:24:28.409043 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:28.409050 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:28.409123 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:28.440623 1043229 cri.go:89] found id: ""
	I1208 01:24:28.440654 1043229 logs.go:282] 0 containers: []
	W1208 01:24:28.440664 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:28.440679 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:28.440694 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:28.505001 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:28.505041 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:28.528265 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:28.528294 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:28.581773 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:28.581808 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:28.645385 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:28.645418 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:28.677649 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:28.677684 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:28.722014 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:28.722042 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:28.796010 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:28.796042 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:28.796056 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:28.838851 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:28.838930 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:31.385609 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:31.396669 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:31.396745 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:31.427619 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:31.427646 1043229 cri.go:89] found id: ""
	I1208 01:24:31.427654 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:31.427713 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:31.431925 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:31.432001 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:31.469284 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:31.469310 1043229 cri.go:89] found id: ""
	I1208 01:24:31.469318 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:31.469376 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:31.473811 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:31.473920 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:31.502926 1043229 cri.go:89] found id: ""
	I1208 01:24:31.502956 1043229 logs.go:282] 0 containers: []
	W1208 01:24:31.502965 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:31.502972 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:31.503031 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:31.561199 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:31.561225 1043229 cri.go:89] found id: ""
	I1208 01:24:31.561234 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:31.561299 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:31.565057 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:31.565139 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:31.617665 1043229 cri.go:89] found id: ""
	I1208 01:24:31.617694 1043229 logs.go:282] 0 containers: []
	W1208 01:24:31.617703 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:31.617718 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:31.617780 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:31.662946 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:31.662973 1043229 cri.go:89] found id: ""
	I1208 01:24:31.662982 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:31.663053 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:31.671602 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:31.671697 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:31.713243 1043229 cri.go:89] found id: ""
	I1208 01:24:31.713273 1043229 logs.go:282] 0 containers: []
	W1208 01:24:31.713283 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:31.713296 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:31.713361 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:31.772815 1043229 cri.go:89] found id: ""
	I1208 01:24:31.772845 1043229 logs.go:282] 0 containers: []
	W1208 01:24:31.772854 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:31.772867 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:31.772886 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:31.842416 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:31.842556 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:31.930030 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:31.930067 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:32.103527 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:32.103545 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:32.103559 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:32.170358 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:32.170434 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:32.245275 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:32.245437 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:32.293964 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:32.294045 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:32.356342 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:32.356415 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:32.456294 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:32.456393 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:34.978846 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:34.991472 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:34.991547 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:35.039561 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:35.039595 1043229 cri.go:89] found id: ""
	I1208 01:24:35.039603 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:35.039663 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:35.048163 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:35.048241 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:35.085969 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:35.085990 1043229 cri.go:89] found id: ""
	I1208 01:24:35.085998 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:35.086059 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:35.099687 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:35.099774 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:35.146500 1043229 cri.go:89] found id: ""
	I1208 01:24:35.146529 1043229 logs.go:282] 0 containers: []
	W1208 01:24:35.146539 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:35.146546 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:35.146608 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:35.197379 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:35.197425 1043229 cri.go:89] found id: ""
	I1208 01:24:35.197435 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:35.197496 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:35.203188 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:35.203289 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:35.250943 1043229 cri.go:89] found id: ""
	I1208 01:24:35.250974 1043229 logs.go:282] 0 containers: []
	W1208 01:24:35.250982 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:35.250989 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:35.251053 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:35.304072 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:35.304092 1043229 cri.go:89] found id: ""
	I1208 01:24:35.304100 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:35.304158 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:35.308706 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:35.308821 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:35.339337 1043229 cri.go:89] found id: ""
	I1208 01:24:35.339359 1043229 logs.go:282] 0 containers: []
	W1208 01:24:35.339367 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:35.339373 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:35.339434 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:35.373231 1043229 cri.go:89] found id: ""
	I1208 01:24:35.373253 1043229 logs.go:282] 0 containers: []
	W1208 01:24:35.373261 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:35.373273 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:35.373285 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:35.434179 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:35.434217 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:35.502734 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:35.502807 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:35.502835 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:35.542313 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:35.542345 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:35.575686 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:35.575725 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:35.617804 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:35.617833 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:35.661521 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:35.661552 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:35.676668 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:35.676698 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:35.723676 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:35.723708 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:38.263346 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:38.273945 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:38.274020 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:38.306055 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:38.306088 1043229 cri.go:89] found id: ""
	I1208 01:24:38.306097 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:38.306157 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:38.309957 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:38.310038 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:38.351634 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:38.351881 1043229 cri.go:89] found id: ""
	I1208 01:24:38.351919 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:38.352002 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:38.360489 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:38.360566 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:38.412238 1043229 cri.go:89] found id: ""
	I1208 01:24:38.412262 1043229 logs.go:282] 0 containers: []
	W1208 01:24:38.412271 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:38.412277 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:38.412343 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:38.454590 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:38.454610 1043229 cri.go:89] found id: ""
	I1208 01:24:38.454618 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:38.454677 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:38.459126 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:38.459195 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:38.490784 1043229 cri.go:89] found id: ""
	I1208 01:24:38.490850 1043229 logs.go:282] 0 containers: []
	W1208 01:24:38.490862 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:38.490868 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:38.490964 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:38.528677 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:38.528753 1043229 cri.go:89] found id: ""
	I1208 01:24:38.528775 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:38.528863 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:38.533657 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:38.533798 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:38.564924 1043229 cri.go:89] found id: ""
	I1208 01:24:38.565002 1043229 logs.go:282] 0 containers: []
	W1208 01:24:38.565025 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:38.565066 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:38.565152 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:38.600979 1043229 cri.go:89] found id: ""
	I1208 01:24:38.601056 1043229 logs.go:282] 0 containers: []
	W1208 01:24:38.601081 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:38.601120 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:38.601149 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:38.679177 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:38.679261 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:38.699802 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:38.699834 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:38.820800 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:38.820826 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:38.820839 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:38.863164 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:38.863202 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:38.920184 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:38.920220 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:38.964378 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:38.964411 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:39.016883 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:39.016919 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:39.072203 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:39.072238 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:41.633826 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:41.650587 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:41.650658 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:41.687625 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:41.687648 1043229 cri.go:89] found id: ""
	I1208 01:24:41.687656 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:41.687714 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:41.691480 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:41.691552 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:41.720997 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:41.721020 1043229 cri.go:89] found id: ""
	I1208 01:24:41.721029 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:41.721104 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:41.724787 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:41.724862 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:41.752143 1043229 cri.go:89] found id: ""
	I1208 01:24:41.752167 1043229 logs.go:282] 0 containers: []
	W1208 01:24:41.752175 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:41.752185 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:41.752253 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:41.778500 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:41.778521 1043229 cri.go:89] found id: ""
	I1208 01:24:41.778529 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:41.778589 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:41.782301 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:41.782378 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:41.811450 1043229 cri.go:89] found id: ""
	I1208 01:24:41.811474 1043229 logs.go:282] 0 containers: []
	W1208 01:24:41.811484 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:41.811491 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:41.811555 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:41.837782 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:41.837806 1043229 cri.go:89] found id: ""
	I1208 01:24:41.837814 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:41.837876 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:41.841757 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:41.841835 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:41.868175 1043229 cri.go:89] found id: ""
	I1208 01:24:41.868199 1043229 logs.go:282] 0 containers: []
	W1208 01:24:41.868208 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:41.868214 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:41.868277 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:41.892688 1043229 cri.go:89] found id: ""
	I1208 01:24:41.892715 1043229 logs.go:282] 0 containers: []
	W1208 01:24:41.892724 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:41.892741 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:41.892753 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:41.952967 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:41.953007 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:41.969848 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:41.969874 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:42.025471 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:42.025569 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:42.065509 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:42.065552 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:42.127490 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:42.127860 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:42.201881 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:42.201978 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:42.305456 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:42.305476 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:42.305489 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:42.377530 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:42.377564 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
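Each pass gathers the same fixed window per source: the last 400 lines from crictl for every container it found, journalctl for kubelet and containerd, and a filtered dmesg. The same windows can be collected by hand; the commands below are copied from the log, and the container ID is a placeholder for one of the IDs printed by the crictl listings above:

    ID=5b94717dbcea9   # placeholder: substitute a full ID from 'sudo crictl ps -a'
    sudo /usr/local/bin/crictl logs --tail 400 "$ID"
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

Tailing a fixed 400 lines keeps each retry cheap; the trade-off is that anything older than the window (for example, the apiserver's first crash) may have to be recovered from the full journal instead.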
	I1208 01:24:44.936829 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:44.947233 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:44.947306 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:44.973649 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:44.973681 1043229 cri.go:89] found id: ""
	I1208 01:24:44.973690 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:44.973758 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:44.977734 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:44.977810 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:45.003745 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:45.003767 1043229 cri.go:89] found id: ""
	I1208 01:24:45.003776 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:45.003846 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:45.009532 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:45.009621 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:45.073867 1043229 cri.go:89] found id: ""
	I1208 01:24:45.073893 1043229 logs.go:282] 0 containers: []
	W1208 01:24:45.073905 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:45.073913 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:45.073992 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:45.179177 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:45.179202 1043229 cri.go:89] found id: ""
	I1208 01:24:45.179212 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:45.179284 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:45.185038 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:45.185123 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:45.219049 1043229 cri.go:89] found id: ""
	I1208 01:24:45.219075 1043229 logs.go:282] 0 containers: []
	W1208 01:24:45.219084 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:45.219091 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:45.219278 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:45.291956 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:45.291991 1043229 cri.go:89] found id: ""
	I1208 01:24:45.292035 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:45.292171 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:45.300816 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:45.300912 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:45.337927 1043229 cri.go:89] found id: ""
	I1208 01:24:45.337997 1043229 logs.go:282] 0 containers: []
	W1208 01:24:45.338023 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:45.338041 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:45.338124 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:45.375318 1043229 cri.go:89] found id: ""
	I1208 01:24:45.375403 1043229 logs.go:282] 0 containers: []
	W1208 01:24:45.375438 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:45.375480 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:45.375532 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:45.451523 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:45.451561 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:45.451574 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:45.484781 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:45.484813 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:45.517307 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:45.517341 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:45.550715 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:45.550755 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:45.608798 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:45.608836 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:45.625151 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:45.625226 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:45.662654 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:45.662689 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:45.692330 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:45.692365 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
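Each "listing CRI containers" / "found id:" pair above corresponds to one crictl invocation: crictl ps -a --quiet --name=NAME prints bare container IDs, one per line, and an empty result is what logs.go reports as "0 containers". A sketch of the same queries, with illustrative output taken from the IDs in this log:

    # -a includes exited containers; --quiet prints only IDs
    sudo crictl ps -a --quiet --name=etcd
    # -> 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40
    sudo crictl ps -a --quiet --name=coredns
    # -> (no output: coredns never started, hence the "0 containers" warning)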
	I1208 01:24:48.236138 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:48.247313 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:48.247386 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:48.281630 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:48.281652 1043229 cri.go:89] found id: ""
	I1208 01:24:48.281660 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:48.281732 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:48.287726 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:48.287802 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:48.327384 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:48.327404 1043229 cri.go:89] found id: ""
	I1208 01:24:48.327413 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:48.327470 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:48.333015 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:48.333100 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:48.365250 1043229 cri.go:89] found id: ""
	I1208 01:24:48.365272 1043229 logs.go:282] 0 containers: []
	W1208 01:24:48.365281 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:48.365288 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:48.365353 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:48.409908 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:48.409930 1043229 cri.go:89] found id: ""
	I1208 01:24:48.409938 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:48.409995 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:48.414693 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:48.414822 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:48.451015 1043229 cri.go:89] found id: ""
	I1208 01:24:48.451037 1043229 logs.go:282] 0 containers: []
	W1208 01:24:48.451046 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:48.451052 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:48.451109 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:48.480985 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:48.481062 1043229 cri.go:89] found id: ""
	I1208 01:24:48.481085 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:48.481177 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:48.485464 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:48.485592 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:48.520297 1043229 cri.go:89] found id: ""
	I1208 01:24:48.520373 1043229 logs.go:282] 0 containers: []
	W1208 01:24:48.520395 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:48.520417 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:48.520509 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:48.573163 1043229 cri.go:89] found id: ""
	I1208 01:24:48.573247 1043229 logs.go:282] 0 containers: []
	W1208 01:24:48.573270 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:48.573299 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:48.573334 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:48.621762 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:48.621802 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:48.681669 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:48.681753 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:48.734602 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:48.734632 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:48.808368 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:48.808451 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:49.003999 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:49.004020 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:49.004034 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:49.065578 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:49.065667 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:49.101636 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:49.101709 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:49.156915 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:49.156998 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:51.675841 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:51.687113 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:51.687179 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:51.717068 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:51.717089 1043229 cri.go:89] found id: ""
	I1208 01:24:51.717098 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:51.717159 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:51.722993 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:51.723083 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:51.755729 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:51.755801 1043229 cri.go:89] found id: ""
	I1208 01:24:51.755825 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:51.755898 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:51.760082 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:51.760155 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:51.789082 1043229 cri.go:89] found id: ""
	I1208 01:24:51.789106 1043229 logs.go:282] 0 containers: []
	W1208 01:24:51.789115 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:51.789121 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:51.789182 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:51.824112 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:51.824132 1043229 cri.go:89] found id: ""
	I1208 01:24:51.824141 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:51.824203 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:51.829389 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:51.829465 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:51.864952 1043229 cri.go:89] found id: ""
	I1208 01:24:51.864975 1043229 logs.go:282] 0 containers: []
	W1208 01:24:51.864983 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:51.864990 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:51.865051 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:51.905516 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:51.905597 1043229 cri.go:89] found id: ""
	I1208 01:24:51.905625 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:51.905698 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:51.909814 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:51.909935 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:51.941381 1043229 cri.go:89] found id: ""
	I1208 01:24:51.941405 1043229 logs.go:282] 0 containers: []
	W1208 01:24:51.941413 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:51.941420 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:51.941480 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:51.975961 1043229 cri.go:89] found id: ""
	I1208 01:24:51.975984 1043229 logs.go:282] 0 containers: []
	W1208 01:24:51.975992 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:51.976006 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:51.976020 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:52.024455 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:52.024536 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:52.142195 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:52.142215 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:52.142228 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:52.214036 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:52.214121 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:52.272532 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:52.272618 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:52.305853 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:52.305936 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:52.368797 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:52.368880 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:52.384375 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:52.384456 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:52.429903 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:52.429941 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
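The recurring "failed describe nodes" block is the bundled kubectl being run against the in-node kubeconfig, whose server entry is the localhost:8443 endpoint named in the error; "connection ... was refused" means the TCP connect itself fails, not an authentication problem. Two quick checks one could run at this point (a sketch; the grep output shown is what the error message implies, not captured output):

    # Confirm which endpoint the in-node kubeconfig targets
    sudo grep 'server:' /var/lib/minikube/kubeconfig
    # -> server: https://localhost:8443
    # Read the apiserver container's own log tail for the reason it is not serving
    sudo /usr/local/bin/crictl logs --tail 50 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771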
	I1208 01:24:54.966585 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:54.979280 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:54.979350 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:55.034837 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:55.034862 1043229 cri.go:89] found id: ""
	I1208 01:24:55.034871 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:55.034939 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:55.043252 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:55.043347 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:55.088762 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:55.088787 1043229 cri.go:89] found id: ""
	I1208 01:24:55.088797 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:55.088868 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:55.095793 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:55.095875 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:55.144273 1043229 cri.go:89] found id: ""
	I1208 01:24:55.144302 1043229 logs.go:282] 0 containers: []
	W1208 01:24:55.144311 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:55.144317 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:55.144379 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:55.188284 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:55.188311 1043229 cri.go:89] found id: ""
	I1208 01:24:55.188331 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:55.188391 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:55.206798 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:55.206897 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:55.261181 1043229 cri.go:89] found id: ""
	I1208 01:24:55.261210 1043229 logs.go:282] 0 containers: []
	W1208 01:24:55.261229 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:55.261235 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:55.261323 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:55.303328 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:55.303353 1043229 cri.go:89] found id: ""
	I1208 01:24:55.303361 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:55.303432 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:55.307672 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:55.307777 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:55.384058 1043229 cri.go:89] found id: ""
	I1208 01:24:55.384085 1043229 logs.go:282] 0 containers: []
	W1208 01:24:55.384094 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:55.384128 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:55.384218 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:55.461328 1043229 cri.go:89] found id: ""
	I1208 01:24:55.461368 1043229 logs.go:282] 0 containers: []
	W1208 01:24:55.461378 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:55.461393 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:55.461406 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:55.575347 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:55.575388 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:55.664636 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:55.664673 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:55.753694 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:55.753736 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:55.803581 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:55.803613 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:55.847875 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:55.847913 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:55.879558 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:55.879588 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:56.012522 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:56.012549 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:56.012564 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:56.114923 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:56.115003 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:58.711668 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:24:58.725841 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:24:58.725922 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:24:58.769509 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:24:58.769546 1043229 cri.go:89] found id: ""
	I1208 01:24:58.769555 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:24:58.769641 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:58.775166 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:24:58.775289 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:24:58.813639 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:58.813666 1043229 cri.go:89] found id: ""
	I1208 01:24:58.813675 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:24:58.813795 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:58.819048 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:24:58.819149 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:24:58.856108 1043229 cri.go:89] found id: ""
	I1208 01:24:58.856153 1043229 logs.go:282] 0 containers: []
	W1208 01:24:58.856166 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:24:58.856174 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:24:58.856279 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:24:58.894130 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:58.894179 1043229 cri.go:89] found id: ""
	I1208 01:24:58.894193 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:24:58.894284 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:58.900101 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:24:58.900198 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:24:58.944425 1043229 cri.go:89] found id: ""
	I1208 01:24:58.944465 1043229 logs.go:282] 0 containers: []
	W1208 01:24:58.944474 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:24:58.944481 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:24:58.944567 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:24:58.981395 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:58.981419 1043229 cri.go:89] found id: ""
	I1208 01:24:58.981427 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:24:58.981489 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:24:58.986063 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:24:58.986144 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:24:59.026856 1043229 cri.go:89] found id: ""
	I1208 01:24:59.026885 1043229 logs.go:282] 0 containers: []
	W1208 01:24:59.026896 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:24:59.026903 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:24:59.026965 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:24:59.063329 1043229 cri.go:89] found id: ""
	I1208 01:24:59.063355 1043229 logs.go:282] 0 containers: []
	W1208 01:24:59.063369 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:24:59.063385 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:24:59.063400 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:24:59.124080 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:24:59.124120 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:24:59.160741 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:24:59.160775 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:24:59.218322 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:24:59.218363 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:24:59.258678 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:24:59.258721 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:24:59.287029 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:24:59.287058 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:24:59.351707 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:24:59.351790 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:24:59.369378 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:24:59.369405 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:24:59.456432 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:24:59.456452 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:24:59.456465 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:02.010848 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:02.022959 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:02.023047 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:02.051967 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:02.051996 1043229 cri.go:89] found id: ""
	I1208 01:25:02.052006 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:02.052076 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:02.056309 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:02.056391 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:02.082647 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:02.082672 1043229 cri.go:89] found id: ""
	I1208 01:25:02.082682 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:02.082748 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:02.086609 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:02.086692 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:02.113327 1043229 cri.go:89] found id: ""
	I1208 01:25:02.113351 1043229 logs.go:282] 0 containers: []
	W1208 01:25:02.113360 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:02.113367 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:02.113431 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:02.139308 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:02.139331 1043229 cri.go:89] found id: ""
	I1208 01:25:02.139340 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:02.139399 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:02.143298 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:02.143388 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:02.169240 1043229 cri.go:89] found id: ""
	I1208 01:25:02.169316 1043229 logs.go:282] 0 containers: []
	W1208 01:25:02.169340 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:02.169359 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:02.169447 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:02.197305 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:02.197325 1043229 cri.go:89] found id: ""
	I1208 01:25:02.197333 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:02.197400 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:02.201321 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:02.201396 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:02.228365 1043229 cri.go:89] found id: ""
	I1208 01:25:02.228389 1043229 logs.go:282] 0 containers: []
	W1208 01:25:02.228398 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:02.228404 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:02.228474 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:02.262579 1043229 cri.go:89] found id: ""
	I1208 01:25:02.262650 1043229 logs.go:282] 0 containers: []
	W1208 01:25:02.262671 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:02.262685 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:02.262699 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:02.325986 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:02.326008 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:02.326023 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:02.361159 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:02.361192 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:02.395406 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:02.395443 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:02.428496 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:02.428530 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:02.457430 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:02.457460 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:02.514848 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:02.514887 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:02.549313 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:02.549347 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:02.579109 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:02.579145 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:05.101941 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:05.112624 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:05.112699 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:05.140771 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:05.140793 1043229 cri.go:89] found id: ""
	I1208 01:25:05.140803 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:05.140863 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:05.144769 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:05.144846 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:05.175250 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:05.175274 1043229 cri.go:89] found id: ""
	I1208 01:25:05.175284 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:05.175352 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:05.179523 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:05.179599 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:05.204548 1043229 cri.go:89] found id: ""
	I1208 01:25:05.204575 1043229 logs.go:282] 0 containers: []
	W1208 01:25:05.204585 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:05.204591 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:05.204652 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:05.231350 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:05.231376 1043229 cri.go:89] found id: ""
	I1208 01:25:05.231389 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:05.231453 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:05.235305 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:05.235385 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:05.261609 1043229 cri.go:89] found id: ""
	I1208 01:25:05.261636 1043229 logs.go:282] 0 containers: []
	W1208 01:25:05.261645 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:05.261651 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:05.261729 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:05.288356 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:05.288387 1043229 cri.go:89] found id: ""
	I1208 01:25:05.288397 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:05.288460 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:05.292590 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:05.292698 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:05.322539 1043229 cri.go:89] found id: ""
	I1208 01:25:05.322584 1043229 logs.go:282] 0 containers: []
	W1208 01:25:05.322594 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:05.322602 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:05.322676 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:05.352712 1043229 cri.go:89] found id: ""
	I1208 01:25:05.352737 1043229 logs.go:282] 0 containers: []
	W1208 01:25:05.352746 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:05.352760 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:05.352790 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:05.387315 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:05.387350 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:05.402931 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:05.402962 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:05.438229 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:05.438263 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:05.471033 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:05.471075 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:05.505401 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:05.505436 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:05.535243 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:05.535279 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:05.569475 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:05.569502 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:05.642339 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:05.642378 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:05.716972 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
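Every iteration of this loop finds the same four static-pod containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) and never finds coredns, kube-proxy, kindnet, or storage-provisioner. The latter are created through the API once the apiserver is reachable and the node registers, so their absence is a downstream symptom of the unreachable apiserver rather than an independent failure. The kubelet journal gathered each cycle is where the dial errors would surface; the same tail can be taken by hand (sketch):

    # Last lines of the kubelet unit; look for repeated connection-refused dials to 8443
    sudo journalctl -u kubelet -n 50 --no-pager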
	I1208 01:25:08.217205 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:08.227991 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:08.228066 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:08.254872 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:08.254915 1043229 cri.go:89] found id: ""
	I1208 01:25:08.254925 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:08.254997 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:08.258683 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:08.258761 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:08.285784 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:08.285808 1043229 cri.go:89] found id: ""
	I1208 01:25:08.285816 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:08.285878 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:08.289782 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:08.289869 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:08.315854 1043229 cri.go:89] found id: ""
	I1208 01:25:08.315877 1043229 logs.go:282] 0 containers: []
	W1208 01:25:08.315885 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:08.315892 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:08.315958 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:08.346206 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:08.346250 1043229 cri.go:89] found id: ""
	I1208 01:25:08.346259 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:08.346329 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:08.350136 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:08.350230 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:08.376726 1043229 cri.go:89] found id: ""
	I1208 01:25:08.376794 1043229 logs.go:282] 0 containers: []
	W1208 01:25:08.376810 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:08.376818 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:08.376889 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:08.405294 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:08.405330 1043229 cri.go:89] found id: ""
	I1208 01:25:08.405339 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:08.405410 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:08.409324 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:08.409411 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:08.435170 1043229 cri.go:89] found id: ""
	I1208 01:25:08.435246 1043229 logs.go:282] 0 containers: []
	W1208 01:25:08.435263 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:08.435272 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:08.435337 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:08.462140 1043229 cri.go:89] found id: ""
	I1208 01:25:08.462180 1043229 logs.go:282] 0 containers: []
	W1208 01:25:08.462190 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:08.462207 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:08.462223 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:08.521940 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:08.521977 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:08.556030 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:08.556062 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:08.593974 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:08.594068 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:08.628419 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:08.628503 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:08.645058 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:08.645083 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:08.710954 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:08.710976 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:08.710990 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:08.746674 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:08.746711 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:08.779005 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:08.779038 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:11.309194 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:11.319831 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:11.319906 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:11.345754 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:11.345775 1043229 cri.go:89] found id: ""
	I1208 01:25:11.345784 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:11.345845 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:11.349574 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:11.349647 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:11.382752 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:11.382778 1043229 cri.go:89] found id: ""
	I1208 01:25:11.382787 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:11.382843 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:11.386479 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:11.386560 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:11.415743 1043229 cri.go:89] found id: ""
	I1208 01:25:11.415768 1043229 logs.go:282] 0 containers: []
	W1208 01:25:11.415783 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:11.415790 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:11.415855 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:11.445450 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:11.445471 1043229 cri.go:89] found id: ""
	I1208 01:25:11.445479 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:11.445538 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:11.449160 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:11.449282 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:11.475018 1043229 cri.go:89] found id: ""
	I1208 01:25:11.475044 1043229 logs.go:282] 0 containers: []
	W1208 01:25:11.475053 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:11.475059 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:11.475149 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:11.501984 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:11.502007 1043229 cri.go:89] found id: ""
	I1208 01:25:11.502016 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:11.502101 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:11.505921 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:11.506021 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:11.531452 1043229 cri.go:89] found id: ""
	I1208 01:25:11.531520 1043229 logs.go:282] 0 containers: []
	W1208 01:25:11.531536 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:11.531546 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:11.531607 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:11.557178 1043229 cri.go:89] found id: ""
	I1208 01:25:11.557205 1043229 logs.go:282] 0 containers: []
	W1208 01:25:11.557215 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:11.557230 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:11.557241 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:11.619951 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:11.619996 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:11.636559 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:11.636646 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:11.675595 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:11.675630 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:11.714131 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:11.714162 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:11.745679 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:11.745714 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:11.777877 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:11.777912 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:11.816808 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:11.816837 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:11.879107 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:11.879129 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:11.879142 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:14.409508 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:14.420096 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:14.420248 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:14.449578 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:14.449602 1043229 cri.go:89] found id: ""
	I1208 01:25:14.449610 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:14.449670 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:14.453631 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:14.453707 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:14.479211 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:14.479233 1043229 cri.go:89] found id: ""
	I1208 01:25:14.479247 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:14.479313 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:14.483353 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:14.483446 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:14.508606 1043229 cri.go:89] found id: ""
	I1208 01:25:14.508644 1043229 logs.go:282] 0 containers: []
	W1208 01:25:14.508655 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:14.508661 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:14.508721 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:14.536806 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:14.536830 1043229 cri.go:89] found id: ""
	I1208 01:25:14.536840 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:14.536928 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:14.540906 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:14.540983 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:14.567663 1043229 cri.go:89] found id: ""
	I1208 01:25:14.567687 1043229 logs.go:282] 0 containers: []
	W1208 01:25:14.567695 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:14.567702 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:14.567763 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:14.606554 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:14.606578 1043229 cri.go:89] found id: ""
	I1208 01:25:14.606586 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:14.606645 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:14.610917 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:14.611042 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:14.646688 1043229 cri.go:89] found id: ""
	I1208 01:25:14.646710 1043229 logs.go:282] 0 containers: []
	W1208 01:25:14.646719 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:14.646725 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:14.646788 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:14.679509 1043229 cri.go:89] found id: ""
	I1208 01:25:14.679535 1043229 logs.go:282] 0 containers: []
	W1208 01:25:14.679545 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:14.679562 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:14.679573 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:14.744333 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:14.744356 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:14.744369 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:14.776587 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:14.776622 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:14.819552 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:14.819586 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:14.881411 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:14.881449 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:14.897003 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:14.897035 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:14.933312 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:14.933343 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:14.972398 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:14.972433 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:15.003383 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:15.003422 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:17.566925 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:17.577581 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:17.577656 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:17.612126 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:17.612150 1043229 cri.go:89] found id: ""
	I1208 01:25:17.612160 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:17.612217 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:17.616494 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:17.616574 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:17.651655 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:17.651679 1043229 cri.go:89] found id: ""
	I1208 01:25:17.651687 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:17.651747 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:17.655940 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:17.656010 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:17.684899 1043229 cri.go:89] found id: ""
	I1208 01:25:17.684961 1043229 logs.go:282] 0 containers: []
	W1208 01:25:17.684987 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:17.685006 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:17.685083 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:17.712857 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:17.712930 1043229 cri.go:89] found id: ""
	I1208 01:25:17.712953 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:17.713023 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:17.716857 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:17.716972 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:17.747261 1043229 cri.go:89] found id: ""
	I1208 01:25:17.747284 1043229 logs.go:282] 0 containers: []
	W1208 01:25:17.747293 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:17.747299 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:17.747363 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:17.773731 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:17.773757 1043229 cri.go:89] found id: ""
	I1208 01:25:17.773765 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:17.773825 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:17.777626 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:17.777715 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:17.805512 1043229 cri.go:89] found id: ""
	I1208 01:25:17.805538 1043229 logs.go:282] 0 containers: []
	W1208 01:25:17.805547 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:17.805554 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:17.805616 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:17.833146 1043229 cri.go:89] found id: ""
	I1208 01:25:17.833175 1043229 logs.go:282] 0 containers: []
	W1208 01:25:17.833185 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:17.833201 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:17.833213 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:17.869714 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:17.869751 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:17.899580 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:17.899610 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:17.957647 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:17.957684 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:17.974107 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:17.974142 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:18.063430 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:18.063458 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:18.063473 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:18.109706 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:18.109793 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:18.142261 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:18.142296 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:18.182600 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:18.182638 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:20.717384 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:20.727551 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:20.727625 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:20.757132 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:20.757155 1043229 cri.go:89] found id: ""
	I1208 01:25:20.757163 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:20.757225 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:20.760897 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:20.760967 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:20.791483 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:20.791507 1043229 cri.go:89] found id: ""
	I1208 01:25:20.791515 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:20.791571 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:20.795274 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:20.795354 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:20.820712 1043229 cri.go:89] found id: ""
	I1208 01:25:20.820734 1043229 logs.go:282] 0 containers: []
	W1208 01:25:20.820742 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:20.820748 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:20.820809 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:20.849224 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:20.849244 1043229 cri.go:89] found id: ""
	I1208 01:25:20.849252 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:20.849308 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:20.853045 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:20.853114 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:20.878532 1043229 cri.go:89] found id: ""
	I1208 01:25:20.878569 1043229 logs.go:282] 0 containers: []
	W1208 01:25:20.878579 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:20.878604 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:20.878688 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:20.906097 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:20.906122 1043229 cri.go:89] found id: ""
	I1208 01:25:20.906130 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:20.906191 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:20.909799 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:20.909872 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:20.934675 1043229 cri.go:89] found id: ""
	I1208 01:25:20.934701 1043229 logs.go:282] 0 containers: []
	W1208 01:25:20.934709 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:20.934716 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:20.934775 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:20.961126 1043229 cri.go:89] found id: ""
	I1208 01:25:20.961150 1043229 logs.go:282] 0 containers: []
	W1208 01:25:20.961159 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:20.961173 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:20.961191 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:21.027079 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:21.027125 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:21.045602 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:21.045632 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:21.079776 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:21.079812 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:21.108296 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:21.108330 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:21.182770 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:21.182842 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:21.182870 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:21.219300 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:21.219331 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:21.252298 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:21.252332 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:21.285896 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:21.285929 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:23.816127 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:23.827107 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:23.827183 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:23.853235 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:23.853259 1043229 cri.go:89] found id: ""
	I1208 01:25:23.853268 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:23.853331 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:23.857095 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:23.857171 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:23.882948 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:23.882975 1043229 cri.go:89] found id: ""
	I1208 01:25:23.882984 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:23.883045 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:23.886989 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:23.887087 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:23.918571 1043229 cri.go:89] found id: ""
	I1208 01:25:23.918673 1043229 logs.go:282] 0 containers: []
	W1208 01:25:23.918706 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:23.918747 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:23.918869 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:23.955474 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:23.955511 1043229 cri.go:89] found id: ""
	I1208 01:25:23.955521 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:23.955621 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:23.959935 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:23.960054 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:23.986315 1043229 cri.go:89] found id: ""
	I1208 01:25:23.986358 1043229 logs.go:282] 0 containers: []
	W1208 01:25:23.986369 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:23.986376 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:23.986476 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:24.020151 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:24.020197 1043229 cri.go:89] found id: ""
	I1208 01:25:24.020207 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:24.020307 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:24.027812 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:24.027920 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:24.056484 1043229 cri.go:89] found id: ""
	I1208 01:25:24.056516 1043229 logs.go:282] 0 containers: []
	W1208 01:25:24.056526 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:24.056534 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:24.056639 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:24.084646 1043229 cri.go:89] found id: ""
	I1208 01:25:24.084672 1043229 logs.go:282] 0 containers: []
	W1208 01:25:24.084693 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:24.084710 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:24.084724 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:24.120078 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:24.120109 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:24.151582 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:24.151618 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:24.188440 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:24.188545 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:24.221384 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:24.221418 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:24.254987 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:24.255020 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:24.286376 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:24.286405 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:24.345829 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:24.345863 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:24.366403 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:24.366434 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:24.442808 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:26.943029 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:26.953513 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:26.953584 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:26.979024 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:26.979060 1043229 cri.go:89] found id: ""
	I1208 01:25:26.979071 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:26.979145 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:26.983038 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:26.983117 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:27.012391 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:27.012415 1043229 cri.go:89] found id: ""
	I1208 01:25:27.012424 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:27.012486 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:27.016767 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:27.016847 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:27.047714 1043229 cri.go:89] found id: ""
	I1208 01:25:27.047736 1043229 logs.go:282] 0 containers: []
	W1208 01:25:27.047746 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:27.047758 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:27.047818 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:27.073149 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:27.073172 1043229 cri.go:89] found id: ""
	I1208 01:25:27.073181 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:27.073237 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:27.076965 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:27.077039 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:27.106776 1043229 cri.go:89] found id: ""
	I1208 01:25:27.106801 1043229 logs.go:282] 0 containers: []
	W1208 01:25:27.106810 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:27.106816 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:27.106877 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:27.137322 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:27.137346 1043229 cri.go:89] found id: ""
	I1208 01:25:27.137354 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:27.137413 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:27.141537 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:27.141611 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:27.166502 1043229 cri.go:89] found id: ""
	I1208 01:25:27.166528 1043229 logs.go:282] 0 containers: []
	W1208 01:25:27.166538 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:27.166545 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:27.166608 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:27.200291 1043229 cri.go:89] found id: ""
	I1208 01:25:27.200361 1043229 logs.go:282] 0 containers: []
	W1208 01:25:27.200377 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:27.200394 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:27.200407 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:27.271235 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:27.271275 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:27.319905 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:27.319934 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:27.352263 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:27.352293 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:27.473005 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:27.473028 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:27.473043 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:27.510146 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:27.510180 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:27.548142 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:27.548223 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:27.603768 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:27.603845 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:27.639585 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:27.639664 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:30.204334 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:30.215523 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:30.215599 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:30.245615 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:30.245642 1043229 cri.go:89] found id: ""
	I1208 01:25:30.245651 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:30.245712 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:30.250041 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:30.250130 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:30.277967 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:30.277993 1043229 cri.go:89] found id: ""
	I1208 01:25:30.278001 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:30.278067 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:30.282338 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:30.282485 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:30.314235 1043229 cri.go:89] found id: ""
	I1208 01:25:30.314261 1043229 logs.go:282] 0 containers: []
	W1208 01:25:30.314270 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:30.314276 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:30.314340 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:30.344820 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:30.344899 1043229 cri.go:89] found id: ""
	I1208 01:25:30.344922 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:30.345016 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:30.349498 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:30.349624 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:30.383731 1043229 cri.go:89] found id: ""
	I1208 01:25:30.383803 1043229 logs.go:282] 0 containers: []
	W1208 01:25:30.383826 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:30.383846 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:30.383937 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:30.417000 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:30.417088 1043229 cri.go:89] found id: ""
	I1208 01:25:30.417115 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:30.417204 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:30.425246 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:30.425409 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:30.458000 1043229 cri.go:89] found id: ""
	I1208 01:25:30.458069 1043229 logs.go:282] 0 containers: []
	W1208 01:25:30.458092 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:30.458116 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:30.458207 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:30.483790 1043229 cri.go:89] found id: ""
	I1208 01:25:30.483814 1043229 logs.go:282] 0 containers: []
	W1208 01:25:30.483823 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:30.483837 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:30.483849 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:30.547256 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:30.547326 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:30.547347 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:30.582201 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:30.582234 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:30.615289 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:30.615322 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:30.650069 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:30.650103 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:30.682972 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:30.683004 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:30.740216 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:30.740252 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:30.778621 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:30.778703 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:30.812917 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:30.812961 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:33.330146 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:33.342629 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:33.342790 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:33.374776 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:33.374857 1043229 cri.go:89] found id: ""
	I1208 01:25:33.374896 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:33.374988 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:33.379770 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:33.379927 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:33.410989 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:33.411012 1043229 cri.go:89] found id: ""
	I1208 01:25:33.411020 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:33.411080 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:33.415064 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:33.415151 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:33.441872 1043229 cri.go:89] found id: ""
	I1208 01:25:33.441943 1043229 logs.go:282] 0 containers: []
	W1208 01:25:33.441968 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:33.441987 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:33.442090 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:33.469085 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:33.469108 1043229 cri.go:89] found id: ""
	I1208 01:25:33.469117 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:33.469185 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:33.473252 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:33.473339 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:33.500762 1043229 cri.go:89] found id: ""
	I1208 01:25:33.500787 1043229 logs.go:282] 0 containers: []
	W1208 01:25:33.500795 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:33.500802 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:33.500861 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:33.528600 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:33.528627 1043229 cri.go:89] found id: ""
	I1208 01:25:33.528635 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:33.528694 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:33.532673 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:33.532785 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:33.559214 1043229 cri.go:89] found id: ""
	I1208 01:25:33.559293 1043229 logs.go:282] 0 containers: []
	W1208 01:25:33.559317 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:33.559340 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:33.559429 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:33.587468 1043229 cri.go:89] found id: ""
	I1208 01:25:33.587537 1043229 logs.go:282] 0 containers: []
	W1208 01:25:33.587551 1043229 logs.go:284] No container was found matching "storage-provisioner"
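Each cri.go/crictl pair above is one pass of the same sweep: for every expected control-plane component, ask the CRI runtime for matching containers, running or exited, and record when none exist. A hand-rolled equivalent (illustrative; the component list mirrors exactly what the log checks):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    if [ -n "$ids" ]; then
	        echo "$name: $ids"
	    else
	        echo "no container found matching \"$name\""
	    fi
	done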
	I1208 01:25:33.587565 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:33.587577 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:33.619721 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:33.619751 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:33.682116 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:33.682150 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
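The host-side gathering above is three plain commands run over SSH inside the node, each clamped to the most recent output. By hand they are (same commands as the log):

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a    # container status
	sudo journalctl -u kubelet -n 400                                # kubelet service log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400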
	I1208 01:25:33.697512 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:33.697543 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:33.761413 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
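This is the section's real failure signature: a kube-apiserver container exists, yet nothing answers on localhost:8443 inside the node, so every kubectl call is refused. Two quick probes that narrow this down, run inside the node (illustrative diagnostics, not part of the test run):

	# Is anything listening on the port kubectl is being refused on?
	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"
	# If the apiserver is serving at all, its liveness endpoint answers:
	curl -sk https://localhost:8443/livez || echo "apiserver not answering"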
	I1208 01:25:33.761436 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:33.761449 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:33.795694 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:33.795733 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:33.824744 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:33.824780 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:33.862940 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:33.862972 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:33.895717 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:33.895751 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
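With the IDs from the sweep in hand, the per-component gathering above reduces to crictl log tails. The same thing by hand (illustrative; any ID from the "found id" lines works):

	for name in kube-apiserver etcd kube-scheduler kube-controller-manager; do
	    for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        echo "=== $name $id ==="
	        sudo /usr/local/bin/crictl logs --tail 400 "$id"
	    done
	done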
	I1208 01:25:36.427284 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:36.438611 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:36.438680 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:36.466565 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:36.466585 1043229 cri.go:89] found id: ""
	I1208 01:25:36.466593 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:36.466652 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:36.471457 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:36.471534 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:36.505068 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:36.505090 1043229 cri.go:89] found id: ""
	I1208 01:25:36.505099 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:36.505159 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:36.509641 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:36.509796 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:36.545023 1043229 cri.go:89] found id: ""
	I1208 01:25:36.545099 1043229 logs.go:282] 0 containers: []
	W1208 01:25:36.545122 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:36.545141 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:36.545231 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:36.588974 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:36.589046 1043229 cri.go:89] found id: ""
	I1208 01:25:36.589069 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:36.589161 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:36.594060 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:36.594190 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:36.637280 1043229 cri.go:89] found id: ""
	I1208 01:25:36.637355 1043229 logs.go:282] 0 containers: []
	W1208 01:25:36.637378 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:36.637398 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:36.637498 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:36.681174 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:36.681247 1043229 cri.go:89] found id: ""
	I1208 01:25:36.681270 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:36.681361 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:36.685638 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:36.685787 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:36.720976 1043229 cri.go:89] found id: ""
	I1208 01:25:36.721130 1043229 logs.go:282] 0 containers: []
	W1208 01:25:36.721153 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:36.721172 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:36.721257 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:36.769166 1043229 cri.go:89] found id: ""
	I1208 01:25:36.769242 1043229 logs.go:282] 0 containers: []
	W1208 01:25:36.769271 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:36.769298 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:36.769350 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:36.838320 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:36.838391 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:36.838419 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:36.871053 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:36.871086 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:36.917195 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:36.917270 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:36.976931 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:36.976970 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:36.992505 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:36.992584 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:37.046362 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:37.046398 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:37.084111 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:37.084144 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:37.135371 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:37.135406 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:39.666549 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:39.677067 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:39.677141 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:39.703289 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:39.703310 1043229 cri.go:89] found id: ""
	I1208 01:25:39.703318 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:39.703378 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:39.707220 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:39.707296 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:39.732266 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:39.732288 1043229 cri.go:89] found id: ""
	I1208 01:25:39.732297 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:39.732400 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:39.736173 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:39.736249 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:39.766327 1043229 cri.go:89] found id: ""
	I1208 01:25:39.766352 1043229 logs.go:282] 0 containers: []
	W1208 01:25:39.766365 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:39.766372 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:39.766435 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:39.793653 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:39.793688 1043229 cri.go:89] found id: ""
	I1208 01:25:39.793697 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:39.793769 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:39.797717 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:39.797825 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:39.825268 1043229 cri.go:89] found id: ""
	I1208 01:25:39.825295 1043229 logs.go:282] 0 containers: []
	W1208 01:25:39.825304 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:39.825311 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:39.825382 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:39.852770 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:39.852794 1043229 cri.go:89] found id: ""
	I1208 01:25:39.852803 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:39.852863 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:39.856756 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:39.856926 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:39.885979 1043229 cri.go:89] found id: ""
	I1208 01:25:39.886006 1043229 logs.go:282] 0 containers: []
	W1208 01:25:39.886016 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:39.886023 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:39.886085 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:39.913065 1043229 cri.go:89] found id: ""
	I1208 01:25:39.913091 1043229 logs.go:282] 0 containers: []
	W1208 01:25:39.913101 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:39.913117 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:39.913130 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:39.946244 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:39.946279 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:39.980633 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:39.980723 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:40.015114 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:40.015161 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:40.069263 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:40.069296 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:40.136574 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:40.136622 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:40.155576 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:40.155608 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:40.229124 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:40.229157 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:40.229177 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:40.266099 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:40.266134 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:42.802545 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:42.812942 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:42.813070 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:42.840480 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:42.840506 1043229 cri.go:89] found id: ""
	I1208 01:25:42.840515 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:42.840594 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:42.844404 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:42.844480 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:42.873409 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:42.873428 1043229 cri.go:89] found id: ""
	I1208 01:25:42.873436 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:42.873493 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:42.877403 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:42.877474 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:42.903082 1043229 cri.go:89] found id: ""
	I1208 01:25:42.903105 1043229 logs.go:282] 0 containers: []
	W1208 01:25:42.903114 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:42.903120 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:42.903181 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:42.927611 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:42.927680 1043229 cri.go:89] found id: ""
	I1208 01:25:42.927702 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:42.927781 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:42.931264 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:42.931337 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:42.955643 1043229 cri.go:89] found id: ""
	I1208 01:25:42.955667 1043229 logs.go:282] 0 containers: []
	W1208 01:25:42.955676 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:42.955683 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:42.955739 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:42.980811 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:42.980848 1043229 cri.go:89] found id: ""
	I1208 01:25:42.980856 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:42.980923 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:42.985061 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:42.985244 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:43.017808 1043229 cri.go:89] found id: ""
	I1208 01:25:43.017876 1043229 logs.go:282] 0 containers: []
	W1208 01:25:43.017901 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:43.017935 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:43.018015 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:43.050006 1043229 cri.go:89] found id: ""
	I1208 01:25:43.050076 1043229 logs.go:282] 0 containers: []
	W1208 01:25:43.050100 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:43.050126 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:43.050166 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:43.087649 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:43.087682 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:43.122120 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:43.122206 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:43.187182 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:43.187221 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:43.202971 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:43.203018 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:43.240293 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:43.240325 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:43.276160 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:43.276189 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:43.346829 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:43.346847 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:43.346861 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:43.387693 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:43.387732 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:45.921435 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:45.932024 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:45.932094 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:45.958801 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:45.958837 1043229 cri.go:89] found id: ""
	I1208 01:25:45.958845 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:45.958913 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:45.962721 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:45.962797 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:45.989424 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:45.989452 1043229 cri.go:89] found id: ""
	I1208 01:25:45.989461 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:45.989522 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:45.993559 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:45.993639 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:46.026914 1043229 cri.go:89] found id: ""
	I1208 01:25:46.026943 1043229 logs.go:282] 0 containers: []
	W1208 01:25:46.026952 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:46.026958 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:46.027023 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:46.055216 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:46.055243 1043229 cri.go:89] found id: ""
	I1208 01:25:46.055258 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:46.055324 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:46.059264 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:46.059342 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:46.085118 1043229 cri.go:89] found id: ""
	I1208 01:25:46.085144 1043229 logs.go:282] 0 containers: []
	W1208 01:25:46.085153 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:46.085160 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:46.085223 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:46.112532 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:46.112557 1043229 cri.go:89] found id: ""
	I1208 01:25:46.112566 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:46.112624 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:46.116999 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:46.117071 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:46.148900 1043229 cri.go:89] found id: ""
	I1208 01:25:46.148927 1043229 logs.go:282] 0 containers: []
	W1208 01:25:46.148937 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:46.148943 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:46.149002 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:46.176507 1043229 cri.go:89] found id: ""
	I1208 01:25:46.176575 1043229 logs.go:282] 0 containers: []
	W1208 01:25:46.176590 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:46.176608 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:46.176621 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:46.192799 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:46.192876 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:46.254752 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:46.254818 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:46.254845 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:46.290006 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:46.290039 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:46.319393 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:46.319428 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:46.356464 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:46.356491 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:46.419824 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:46.419861 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:46.454874 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:46.454911 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:46.489337 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:46.489370 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:49.025996 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:49.037060 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:49.037140 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:49.063378 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:49.063401 1043229 cri.go:89] found id: ""
	I1208 01:25:49.063409 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:49.063467 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:49.067126 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:49.067204 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:49.095018 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:49.095043 1043229 cri.go:89] found id: ""
	I1208 01:25:49.095052 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:49.095115 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:49.099901 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:49.099976 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:49.126737 1043229 cri.go:89] found id: ""
	I1208 01:25:49.126764 1043229 logs.go:282] 0 containers: []
	W1208 01:25:49.126773 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:49.126781 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:49.126841 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:49.156812 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:49.156843 1043229 cri.go:89] found id: ""
	I1208 01:25:49.156853 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:49.156912 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:49.161422 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:49.161499 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:49.186657 1043229 cri.go:89] found id: ""
	I1208 01:25:49.186681 1043229 logs.go:282] 0 containers: []
	W1208 01:25:49.186690 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:49.186697 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:49.186758 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:49.217027 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:49.217055 1043229 cri.go:89] found id: ""
	I1208 01:25:49.217076 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:49.217147 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:49.221128 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:49.221209 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:49.249684 1043229 cri.go:89] found id: ""
	I1208 01:25:49.249712 1043229 logs.go:282] 0 containers: []
	W1208 01:25:49.249757 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:49.249769 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:49.249849 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:49.277531 1043229 cri.go:89] found id: ""
	I1208 01:25:49.277557 1043229 logs.go:282] 0 containers: []
	W1208 01:25:49.277566 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:49.277580 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:49.277593 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:49.317217 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:49.317252 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:49.348616 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:49.348651 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:49.379017 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:49.379060 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:49.447519 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:49.447542 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:49.447555 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:49.484482 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:49.484514 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:49.522853 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:49.522886 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:49.556773 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:49.556805 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:49.615957 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:49.615995 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:52.131681 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:52.143069 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:52.143142 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:52.168253 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:52.168274 1043229 cri.go:89] found id: ""
	I1208 01:25:52.168282 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:52.168343 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:52.172022 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:52.172104 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:52.196794 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:52.196814 1043229 cri.go:89] found id: ""
	I1208 01:25:52.196822 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:52.196877 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:52.200670 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:52.200786 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:52.226059 1043229 cri.go:89] found id: ""
	I1208 01:25:52.226082 1043229 logs.go:282] 0 containers: []
	W1208 01:25:52.226090 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:52.226097 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:52.226155 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:52.252424 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:52.252447 1043229 cri.go:89] found id: ""
	I1208 01:25:52.252456 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:52.252513 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:52.256302 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:52.256374 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:52.282510 1043229 cri.go:89] found id: ""
	I1208 01:25:52.282532 1043229 logs.go:282] 0 containers: []
	W1208 01:25:52.282542 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:52.282548 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:52.282617 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:52.307622 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:52.307690 1043229 cri.go:89] found id: ""
	I1208 01:25:52.307712 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:52.307801 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:52.314798 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:52.314918 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:52.344702 1043229 cri.go:89] found id: ""
	I1208 01:25:52.344772 1043229 logs.go:282] 0 containers: []
	W1208 01:25:52.344793 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:52.344800 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:52.344873 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:52.373599 1043229 cri.go:89] found id: ""
	I1208 01:25:52.373623 1043229 logs.go:282] 0 containers: []
	W1208 01:25:52.373632 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:52.373650 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:52.373663 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:52.402416 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:52.402460 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:52.434276 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:52.434308 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:52.495837 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:52.495881 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:52.511278 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:52.511305 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:52.549780 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:52.549812 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:52.616059 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:52.616091 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:52.616105 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:52.657375 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:52.657410 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:52.690335 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:52.690370 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:55.226606 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:55.240640 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:55.240712 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:55.278351 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:55.278372 1043229 cri.go:89] found id: ""
	I1208 01:25:55.278380 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:55.278471 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:55.282991 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:55.283100 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:55.314154 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:55.314181 1043229 cri.go:89] found id: ""
	I1208 01:25:55.314190 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:55.314248 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:55.319429 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:55.319508 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:55.345962 1043229 cri.go:89] found id: ""
	I1208 01:25:55.345991 1043229 logs.go:282] 0 containers: []
	W1208 01:25:55.346000 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:55.346006 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:55.346066 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:55.383535 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:55.383563 1043229 cri.go:89] found id: ""
	I1208 01:25:55.383572 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:55.383631 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:55.387424 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:55.387500 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:55.419462 1043229 cri.go:89] found id: ""
	I1208 01:25:55.419487 1043229 logs.go:282] 0 containers: []
	W1208 01:25:55.419495 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:55.419502 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:55.419560 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:55.462093 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:55.462117 1043229 cri.go:89] found id: ""
	I1208 01:25:55.462127 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:55.462188 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:55.466113 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:55.466184 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:55.500957 1043229 cri.go:89] found id: ""
	I1208 01:25:55.500985 1043229 logs.go:282] 0 containers: []
	W1208 01:25:55.500994 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:55.501000 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:55.501064 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:55.533338 1043229 cri.go:89] found id: ""
	I1208 01:25:55.533368 1043229 logs.go:282] 0 containers: []
	W1208 01:25:55.533379 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:25:55.533392 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:55.533414 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:55.572942 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:55.572977 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:55.621222 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:55.621252 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:55.656474 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:55.656510 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:25:55.737938 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:55.738025 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:55.804145 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:55.804250 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:55.947098 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:25:55.947133 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:55.947154 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:56.007846 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:56.007974 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:56.037532 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:56.037572 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:58.587195 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:25:58.599852 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:25:58.599963 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:25:58.629172 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:58.629191 1043229 cri.go:89] found id: ""
	I1208 01:25:58.629200 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:25:58.629260 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:58.634007 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:25:58.634080 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:25:58.661187 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:58.661208 1043229 cri.go:89] found id: ""
	I1208 01:25:58.661216 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:25:58.661275 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:58.665238 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:25:58.665309 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:25:58.695654 1043229 cri.go:89] found id: ""
	I1208 01:25:58.695677 1043229 logs.go:282] 0 containers: []
	W1208 01:25:58.695686 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:25:58.695692 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:25:58.695751 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:25:58.724889 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:58.724912 1043229 cri.go:89] found id: ""
	I1208 01:25:58.724922 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:25:58.724982 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:58.728800 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:25:58.728876 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:25:58.756803 1043229 cri.go:89] found id: ""
	I1208 01:25:58.756827 1043229 logs.go:282] 0 containers: []
	W1208 01:25:58.756835 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:25:58.756841 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:25:58.756905 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:25:58.782388 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:58.782496 1043229 cri.go:89] found id: ""
	I1208 01:25:58.782521 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:25:58.782593 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:25:58.786441 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:25:58.786546 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:25:58.813231 1043229 cri.go:89] found id: ""
	I1208 01:25:58.813255 1043229 logs.go:282] 0 containers: []
	W1208 01:25:58.813264 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:25:58.813270 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:25:58.813330 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:25:58.867335 1043229 cri.go:89] found id: ""
	I1208 01:25:58.867406 1043229 logs.go:282] 0 containers: []
	W1208 01:25:58.867429 1043229 logs.go:284] No container was found matching "storage-provisioner"
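Each cycle enumerates control-plane containers one component at a time with `sudo crictl ps -a --quiet --name=<component>`; the command prints one container ID per line, and empty output produces the `No container was found matching ...` warning seen above (in this run coredns, kube-proxy, kindnet, and storage-provisioner never come up). A sketch of that enumeration, assuming only that crictl is installed and sudo is passwordless:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same crictl query the log shows and splits the
    // output into IDs; a failed or empty query yields no IDs.
    func containerIDs(component string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids := containerIDs(c)
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }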
	I1208 01:25:58.867458 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:25:58.867496 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:25:58.894479 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:25:58.894560 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:25:59.009044 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:25:59.009072 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:25:59.009088 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:25:59.051027 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:25:59.051062 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:25:59.092640 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:25:59.092676 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:25:59.125086 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:25:59.125123 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:25:59.189622 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:25:59.189660 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:25:59.238215 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:25:59.238247 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:25:59.273449 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:25:59.273484 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:01.814332 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:01.825248 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:01.825327 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:01.865261 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:01.865282 1043229 cri.go:89] found id: ""
	I1208 01:26:01.865290 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:01.865357 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:01.869825 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:01.869899 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:01.899629 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:01.899649 1043229 cri.go:89] found id: ""
	I1208 01:26:01.899657 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:01.899715 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:01.904232 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:01.904303 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:01.932893 1043229 cri.go:89] found id: ""
	I1208 01:26:01.932918 1043229 logs.go:282] 0 containers: []
	W1208 01:26:01.932926 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:01.932933 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:01.932992 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:01.963478 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:01.963505 1043229 cri.go:89] found id: ""
	I1208 01:26:01.963519 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:01.963583 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:01.967721 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:01.967796 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:01.994284 1043229 cri.go:89] found id: ""
	I1208 01:26:01.994309 1043229 logs.go:282] 0 containers: []
	W1208 01:26:01.994318 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:01.994325 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:01.994387 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:02.029650 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:02.029685 1043229 cri.go:89] found id: ""
	I1208 01:26:02.029695 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:02.029769 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:02.033819 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:02.033927 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:02.062698 1043229 cri.go:89] found id: ""
	I1208 01:26:02.062721 1043229 logs.go:282] 0 containers: []
	W1208 01:26:02.062730 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:02.062737 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:02.062806 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:02.093305 1043229 cri.go:89] found id: ""
	I1208 01:26:02.093330 1043229 logs.go:282] 0 containers: []
	W1208 01:26:02.093340 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:02.093353 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:02.093365 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:02.108916 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:02.108943 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:02.178638 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
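The recurring `describe nodes` failure above is kubectl dialing the apiserver's secure port on localhost:8443 and finding nothing listening, so the command exits with status 1 and `connection refused`; minikube records it as a warning and continues polling. A hypothetical stand-alone probe (an illustration, not minikube code) that reproduces the same symptom from inside the node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same TCP endpoint kubectl is failing to reach in the log.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }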
	I1208 01:26:02.178663 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:02.178678 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:02.213946 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:02.213981 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:02.249127 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:02.249170 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:02.286686 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:02.286719 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:02.318252 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:02.318292 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:02.376801 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:02.376840 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:02.415854 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:02.415888 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:04.962637 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:04.973281 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:04.973354 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:05.001029 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:05.001054 1043229 cri.go:89] found id: ""
	I1208 01:26:05.001062 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:05.001123 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:05.005885 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:05.005982 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:05.047391 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:05.047415 1043229 cri.go:89] found id: ""
	I1208 01:26:05.047424 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:05.047482 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:05.051512 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:05.051596 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:05.078722 1043229 cri.go:89] found id: ""
	I1208 01:26:05.078746 1043229 logs.go:282] 0 containers: []
	W1208 01:26:05.078755 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:05.078761 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:05.078822 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:05.108453 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:05.108475 1043229 cri.go:89] found id: ""
	I1208 01:26:05.108483 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:05.108541 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:05.112711 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:05.112794 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:05.139301 1043229 cri.go:89] found id: ""
	I1208 01:26:05.139325 1043229 logs.go:282] 0 containers: []
	W1208 01:26:05.139333 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:05.139340 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:05.139406 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:05.169436 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:05.169461 1043229 cri.go:89] found id: ""
	I1208 01:26:05.169470 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:05.169528 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:05.173482 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:05.173568 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:05.201033 1043229 cri.go:89] found id: ""
	I1208 01:26:05.201058 1043229 logs.go:282] 0 containers: []
	W1208 01:26:05.201067 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:05.201074 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:05.201135 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:05.227849 1043229 cri.go:89] found id: ""
	I1208 01:26:05.227921 1043229 logs.go:282] 0 containers: []
	W1208 01:26:05.227937 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:05.227951 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:05.227969 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
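The `container status` step uses a shell fallback: `which crictl || echo crictl` resolves crictl's full path but keeps the bare name if it is not on PATH, and the trailing `|| sudo docker ps -a` covers hosts where only Docker is available. The lookup half translates to Go roughly as follows (a sketch, assuming nothing beyond the standard library):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        path, err := exec.LookPath("crictl") // shell: `which crictl`
        if err != nil {
            path = "crictl" // shell: `|| echo crictl` falls back to the bare name
        }
        fmt.Println("would run:", path, "ps -a")
    }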
	I1208 01:26:05.258032 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:05.258064 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:05.316175 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:05.316214 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:05.331850 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:05.331884 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:05.371930 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:05.371969 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:05.408322 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:05.408357 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:05.475497 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:05.475562 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:05.475581 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:05.508755 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:05.508788 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:05.545764 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:05.545800 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:08.076042 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:08.086957 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:08.087038 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:08.114948 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:08.114971 1043229 cri.go:89] found id: ""
	I1208 01:26:08.114980 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:08.115040 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:08.118827 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:08.118903 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:08.145337 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:08.145360 1043229 cri.go:89] found id: ""
	I1208 01:26:08.145370 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:08.145428 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:08.149160 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:08.149239 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:08.174909 1043229 cri.go:89] found id: ""
	I1208 01:26:08.174934 1043229 logs.go:282] 0 containers: []
	W1208 01:26:08.174944 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:08.174951 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:08.175030 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:08.201583 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:08.201607 1043229 cri.go:89] found id: ""
	I1208 01:26:08.201615 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:08.201673 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:08.205347 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:08.205429 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:08.231420 1043229 cri.go:89] found id: ""
	I1208 01:26:08.231445 1043229 logs.go:282] 0 containers: []
	W1208 01:26:08.231453 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:08.231460 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:08.231520 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:08.256330 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:08.256350 1043229 cri.go:89] found id: ""
	I1208 01:26:08.256358 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:08.256415 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:08.260155 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:08.260228 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:08.289466 1043229 cri.go:89] found id: ""
	I1208 01:26:08.289490 1043229 logs.go:282] 0 containers: []
	W1208 01:26:08.289498 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:08.289506 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:08.289572 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:08.315741 1043229 cri.go:89] found id: ""
	I1208 01:26:08.315765 1043229 logs.go:282] 0 containers: []
	W1208 01:26:08.315775 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:08.315789 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:08.315805 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:08.379632 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:08.379652 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:08.379666 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:08.414709 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:08.414745 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:08.450039 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:08.450079 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:08.482886 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:08.482920 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:08.542687 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:08.542723 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:08.558261 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:08.558292 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:08.592092 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:08.592166 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:08.624376 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:08.624459 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
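Besides per-container logs via `crictl logs --tail 400 <id>`, each cycle also pulls systemd-unit logs with `journalctl -u <unit> -n 400` and kernel warnings through a filtered `dmesg`, everything capped at 400 lines. A sketch of that gathering step, assuming the same commands are available locally (minikube actually runs them on the node over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command and reports how much it captured.
    func gather(label string, argv ...string) {
        out, _ := exec.Command(argv[0], argv[1:]...).CombinedOutput()
        fmt.Printf("== %s: captured %d bytes ==\n", label, len(out))
    }

    func main() {
        gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
        gather("containerd", "sudo", "journalctl", "-u", "containerd", "-n", "400")
        // The dmesg filter needs a shell because of the pipe, as in the log.
        gather("dmesg", "/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        // Per-container logs use the IDs found during enumeration, e.g. the
        // kube-apiserver container seen throughout this run.
        gather("kube-apiserver", "sudo", "/usr/local/bin/crictl", "logs", "--tail", "400",
            "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771")
    }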
	I1208 01:26:11.172778 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:11.189685 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:11.189769 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:11.232537 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:11.232556 1043229 cri.go:89] found id: ""
	I1208 01:26:11.232564 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:11.232618 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:11.239125 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:11.239200 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:11.278815 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:11.278835 1043229 cri.go:89] found id: ""
	I1208 01:26:11.278844 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:11.278987 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:11.287212 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:11.287288 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:11.324510 1043229 cri.go:89] found id: ""
	I1208 01:26:11.324533 1043229 logs.go:282] 0 containers: []
	W1208 01:26:11.324542 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:11.324549 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:11.324614 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:11.353853 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:11.353873 1043229 cri.go:89] found id: ""
	I1208 01:26:11.353881 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:11.353939 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:11.358012 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:11.358085 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:11.400954 1043229 cri.go:89] found id: ""
	I1208 01:26:11.401042 1043229 logs.go:282] 0 containers: []
	W1208 01:26:11.401067 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:11.401104 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:11.401201 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:11.437527 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:11.437605 1043229 cri.go:89] found id: ""
	I1208 01:26:11.437629 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:11.437711 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:11.445173 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:11.445297 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:11.487467 1043229 cri.go:89] found id: ""
	I1208 01:26:11.487548 1043229 logs.go:282] 0 containers: []
	W1208 01:26:11.487579 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:11.487600 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:11.487687 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:11.524935 1043229 cri.go:89] found id: ""
	I1208 01:26:11.525022 1043229 logs.go:282] 0 containers: []
	W1208 01:26:11.525046 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:11.525084 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:11.525116 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:11.565573 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:11.565608 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:11.603693 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:11.603736 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:11.699001 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:11.699041 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:11.719065 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:11.719098 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:11.771339 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:11.771376 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:11.818822 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:11.818857 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:11.877788 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:11.877820 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:11.959867 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:11.959900 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:11.959939 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:14.498562 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:14.509107 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:14.509221 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:14.535012 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:14.535094 1043229 cri.go:89] found id: ""
	I1208 01:26:14.535116 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:14.535203 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:14.539024 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:14.539095 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:14.571033 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:14.571057 1043229 cri.go:89] found id: ""
	I1208 01:26:14.571066 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:14.571151 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:14.574977 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:14.575082 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:14.608660 1043229 cri.go:89] found id: ""
	I1208 01:26:14.608684 1043229 logs.go:282] 0 containers: []
	W1208 01:26:14.608693 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:14.608699 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:14.608759 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:14.648092 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:14.648117 1043229 cri.go:89] found id: ""
	I1208 01:26:14.648125 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:14.648182 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:14.652559 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:14.652640 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:14.679194 1043229 cri.go:89] found id: ""
	I1208 01:26:14.679220 1043229 logs.go:282] 0 containers: []
	W1208 01:26:14.679229 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:14.679235 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:14.679296 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:14.710681 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:14.710705 1043229 cri.go:89] found id: ""
	I1208 01:26:14.710713 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:14.710774 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:14.714613 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:14.714737 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:14.744541 1043229 cri.go:89] found id: ""
	I1208 01:26:14.744567 1043229 logs.go:282] 0 containers: []
	W1208 01:26:14.744582 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:14.744589 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:14.744649 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:14.770308 1043229 cri.go:89] found id: ""
	I1208 01:26:14.770332 1043229 logs.go:282] 0 containers: []
	W1208 01:26:14.770341 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:14.770355 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:14.770366 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:14.829777 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:14.829813 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:14.845073 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:14.845103 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:14.911633 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:14.911651 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:14.911667 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:14.946178 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:14.946212 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:14.980022 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:14.980061 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:15.060980 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:15.061174 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:15.101390 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:15.101421 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:15.137206 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:15.137243 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:17.670584 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:17.682602 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:17.682669 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:17.716296 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:17.716317 1043229 cri.go:89] found id: ""
	I1208 01:26:17.716332 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:17.716389 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:17.720772 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:17.720845 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:17.754755 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:17.754776 1043229 cri.go:89] found id: ""
	I1208 01:26:17.754785 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:17.754842 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:17.759289 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:17.759410 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:17.789846 1043229 cri.go:89] found id: ""
	I1208 01:26:17.789914 1043229 logs.go:282] 0 containers: []
	W1208 01:26:17.789937 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:17.789957 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:17.790051 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:17.833043 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:17.833104 1043229 cri.go:89] found id: ""
	I1208 01:26:17.833127 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:17.833214 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:17.837908 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:17.838038 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:17.869939 1043229 cri.go:89] found id: ""
	I1208 01:26:17.870005 1043229 logs.go:282] 0 containers: []
	W1208 01:26:17.870028 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:17.870046 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:17.870134 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:17.909084 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:17.909121 1043229 cri.go:89] found id: ""
	I1208 01:26:17.909131 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:17.909201 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:17.913716 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:17.913801 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:17.943987 1043229 cri.go:89] found id: ""
	I1208 01:26:17.944017 1043229 logs.go:282] 0 containers: []
	W1208 01:26:17.944025 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:17.944031 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:17.944087 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:17.980295 1043229 cri.go:89] found id: ""
	I1208 01:26:17.980324 1043229 logs.go:282] 0 containers: []
	W1208 01:26:17.980333 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:17.980348 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:17.980358 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:18.060423 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:18.060459 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:18.076813 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:18.076844 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:18.170631 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:18.170655 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:18.170669 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:18.209547 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:18.209584 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:18.254434 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:18.254495 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:18.308238 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:18.308277 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:18.392388 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:18.392422 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:18.451563 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:18.451684 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:20.985836 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:20.996292 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:20.996362 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:21.024338 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:21.024360 1043229 cri.go:89] found id: ""
	I1208 01:26:21.024368 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:21.024427 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:21.028376 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:21.028458 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:21.054659 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:21.054681 1043229 cri.go:89] found id: ""
	I1208 01:26:21.054690 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:21.054747 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:21.058351 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:21.058423 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:21.083533 1043229 cri.go:89] found id: ""
	I1208 01:26:21.083556 1043229 logs.go:282] 0 containers: []
	W1208 01:26:21.083566 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:21.083573 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:21.083637 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:21.109116 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:21.109140 1043229 cri.go:89] found id: ""
	I1208 01:26:21.109148 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:21.109205 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:21.112800 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:21.112873 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:21.145556 1043229 cri.go:89] found id: ""
	I1208 01:26:21.145578 1043229 logs.go:282] 0 containers: []
	W1208 01:26:21.145586 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:21.145593 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:21.145651 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:21.172730 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:21.172751 1043229 cri.go:89] found id: ""
	I1208 01:26:21.172762 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:21.172821 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:21.176667 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:21.176757 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:21.203259 1043229 cri.go:89] found id: ""
	I1208 01:26:21.203281 1043229 logs.go:282] 0 containers: []
	W1208 01:26:21.203289 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:21.203296 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:21.203356 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:21.228580 1043229 cri.go:89] found id: ""
	I1208 01:26:21.228603 1043229 logs.go:282] 0 containers: []
	W1208 01:26:21.228611 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:21.228630 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:21.228642 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:21.293357 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:21.293418 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:21.293447 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:21.330535 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:21.330576 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:21.367005 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:21.367073 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:21.402239 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:21.402273 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:21.456209 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:21.456250 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:21.489611 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:21.489641 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:21.524541 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:21.524575 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:21.585141 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:21.585177 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
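	The block above is one iteration of minikube's apiserver health-check loop: roughly every three seconds it probes for the apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*", enumerates each expected component's containers through crictl, and, while the cluster stays unhealthy, dumps the last 400 lines of kubelet, dmesg, containerd and per-container logs. The loop repeating for minutes means the healthy-cluster condition is never met. A minimal Go sketch of that polling shape; checkApiserver and gatherLogs are hypothetical stand-ins, not minikube's real helpers:

	    // Minimal sketch of the ~3s health-poll loop visible above; checkApiserver
	    // and gatherLogs are made-up names, not minikube's actual functions.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // checkApiserver mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
	    // probe: a nil error means pgrep found a matching process.
	    func checkApiserver(ctx context.Context) bool {
	    	return exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    // gatherLogs stands in for the crictl/journalctl collection shown above.
	    func gatherLogs(ctx context.Context) {}

	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	    	defer cancel()
	    	ticker := time.NewTicker(3 * time.Second)
	    	defer ticker.Stop()
	    	for {
	    		select {
	    		case <-ctx.Done():
	    			fmt.Println("gave up waiting for a healthy apiserver")
	    			return
	    		case <-ticker.C:
	    			if checkApiserver(ctx) {
	    				fmt.Println("apiserver process is up")
	    				return
	    			}
	    			// On failure, fall back to log gathering, as in the
	    			// "Gathering logs for ..." lines above.
	    			gatherLogs(ctx)
	    		}
	    	}
	    }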
	I1208 01:26:24.101211 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:24.111769 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:24.111843 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:24.139321 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:24.139393 1043229 cri.go:89] found id: ""
	I1208 01:26:24.139408 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:24.139470 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:24.144604 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:24.144675 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:24.169820 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:24.169841 1043229 cri.go:89] found id: ""
	I1208 01:26:24.169850 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:24.169910 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:24.173847 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:24.173921 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:24.202569 1043229 cri.go:89] found id: ""
	I1208 01:26:24.202595 1043229 logs.go:282] 0 containers: []
	W1208 01:26:24.202604 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:24.202611 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:24.202673 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:24.230299 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:24.230321 1043229 cri.go:89] found id: ""
	I1208 01:26:24.230329 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:24.230387 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:24.234336 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:24.234416 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:24.262412 1043229 cri.go:89] found id: ""
	I1208 01:26:24.262473 1043229 logs.go:282] 0 containers: []
	W1208 01:26:24.262485 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:24.262497 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:24.262560 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:24.288565 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:24.288587 1043229 cri.go:89] found id: ""
	I1208 01:26:24.288595 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:24.288660 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:24.292504 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:24.292582 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:24.321428 1043229 cri.go:89] found id: ""
	I1208 01:26:24.321517 1043229 logs.go:282] 0 containers: []
	W1208 01:26:24.321544 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:24.321564 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:24.321676 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:24.357869 1043229 cri.go:89] found id: ""
	I1208 01:26:24.357901 1043229 logs.go:282] 0 containers: []
	W1208 01:26:24.357914 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:24.357929 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:24.357941 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:24.393747 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:24.393783 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:24.457823 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:24.457866 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:24.473834 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:24.473864 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:24.508347 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:24.508382 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:24.549811 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:24.549844 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:24.579136 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:24.579170 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:24.649640 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
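	Every cycle's "describe nodes" step fails the same way: a kube-apiserver container exists (5b94717d...), but nothing is accepting TCP connections on localhost:8443 inside the node, so every kubectl call against the node's kubeconfig is refused. This class of failure shows up with a bare TCP probe; the sketch below is purely illustrative and would have to run inside the node to see the same refusal:

	    // Illustrative TCP probe for the refusal kubectl reports above.
	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	    	if err != nil {
	    		// Matches "The connection to the server localhost:8443 was refused".
	    		fmt.Println("probe failed:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("something is listening on localhost:8443")
	    }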
	I1208 01:26:24.649716 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:24.649795 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:24.684037 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:24.684072 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:27.219083 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:27.229836 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:27.229911 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:27.257140 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:27.257163 1043229 cri.go:89] found id: ""
	I1208 01:26:27.257171 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:27.257235 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:27.261130 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:27.261207 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:27.287687 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:27.287710 1043229 cri.go:89] found id: ""
	I1208 01:26:27.287718 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:27.287780 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:27.291960 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:27.292033 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:27.321466 1043229 cri.go:89] found id: ""
	I1208 01:26:27.321498 1043229 logs.go:282] 0 containers: []
	W1208 01:26:27.321508 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:27.321514 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:27.321574 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:27.357370 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:27.357394 1043229 cri.go:89] found id: ""
	I1208 01:26:27.357402 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:27.357461 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:27.361603 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:27.361672 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:27.392555 1043229 cri.go:89] found id: ""
	I1208 01:26:27.392581 1043229 logs.go:282] 0 containers: []
	W1208 01:26:27.392590 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:27.392597 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:27.392656 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:27.421780 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:27.421802 1043229 cri.go:89] found id: ""
	I1208 01:26:27.421811 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:27.421867 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:27.425536 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:27.425606 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:27.453179 1043229 cri.go:89] found id: ""
	I1208 01:26:27.453202 1043229 logs.go:282] 0 containers: []
	W1208 01:26:27.453211 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:27.453217 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:27.453277 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:27.483575 1043229 cri.go:89] found id: ""
	I1208 01:26:27.483599 1043229 logs.go:282] 0 containers: []
	W1208 01:26:27.483607 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:27.483622 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:27.483634 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:27.523733 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:27.523778 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:27.553699 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:27.553736 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:27.588207 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:27.588243 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:27.603610 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:27.603638 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:27.671307 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:27.671326 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:27.671339 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:27.705597 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:27.705644 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:27.742553 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:27.742586 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:27.803161 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:27.803197 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
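	The name-to-ID mapping in these cycles works exactly as the Run lines show: for each expected component, "crictl ps -a --quiet --name=<component>" prints matching container IDs one per line (-a includes exited containers, --quiet suppresses everything but the ID). An empty result produces the 'No container was found matching' warning; a hit is then passed to "crictl logs --tail 400 <id>". A small sketch of the same lookup, reusing the component names from this run:

	    // Sketch of the name-to-ID lookup pattern above, using the crictl
	    // flags visible in the log (ps -a --quiet --name=...).
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    func containerIDs(name string) ([]string, error) {
	    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	return strings.Fields(string(out)), nil // one 64-hex ID per line
	    }

	    func main() {
	    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-proxy"} {
	    		ids, err := containerIDs(name)
	    		if err != nil || len(ids) == 0 {
	    			fmt.Printf("no container found matching %q\n", name)
	    			continue
	    		}
	    		fmt.Println(name, "->", ids)
	    	}
	    }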
	I1208 01:26:30.340087 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:30.369975 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:30.370050 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:30.432237 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:30.432261 1043229 cri.go:89] found id: ""
	I1208 01:26:30.432270 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:30.432326 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:30.440370 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:30.440443 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:30.484188 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:30.484212 1043229 cri.go:89] found id: ""
	I1208 01:26:30.484220 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:30.484278 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:30.488290 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:30.488366 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:30.522832 1043229 cri.go:89] found id: ""
	I1208 01:26:30.522860 1043229 logs.go:282] 0 containers: []
	W1208 01:26:30.522868 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:30.522874 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:30.522941 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:30.553390 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:30.553412 1043229 cri.go:89] found id: ""
	I1208 01:26:30.553420 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:30.553479 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:30.557278 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:30.557350 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:30.584297 1043229 cri.go:89] found id: ""
	I1208 01:26:30.584324 1043229 logs.go:282] 0 containers: []
	W1208 01:26:30.584333 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:30.584339 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:30.584405 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:30.620028 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:30.620052 1043229 cri.go:89] found id: ""
	I1208 01:26:30.620060 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:30.620115 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:30.623979 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:30.624050 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:30.654435 1043229 cri.go:89] found id: ""
	I1208 01:26:30.654481 1043229 logs.go:282] 0 containers: []
	W1208 01:26:30.654489 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:30.654495 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:30.654552 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:30.713016 1043229 cri.go:89] found id: ""
	I1208 01:26:30.713044 1043229 logs.go:282] 0 containers: []
	W1208 01:26:30.713054 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:30.713068 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:30.713080 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:30.774111 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:30.774192 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:30.790818 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:30.790843 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:30.831600 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:30.831672 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:30.866067 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:30.866142 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:30.907697 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:30.907774 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:30.946645 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:30.946727 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:31.013260 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:31.013288 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:31.013310 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:31.044605 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:31.044647 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:33.575648 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:33.586017 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:33.586089 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:33.616409 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:33.616435 1043229 cri.go:89] found id: ""
	I1208 01:26:33.616443 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:33.616500 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:33.622561 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:33.622641 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:33.652336 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:33.652362 1043229 cri.go:89] found id: ""
	I1208 01:26:33.652385 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:33.652463 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:33.656389 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:33.656472 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:33.689056 1043229 cri.go:89] found id: ""
	I1208 01:26:33.689083 1043229 logs.go:282] 0 containers: []
	W1208 01:26:33.689092 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:33.689098 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:33.689177 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:33.728671 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:33.728697 1043229 cri.go:89] found id: ""
	I1208 01:26:33.728705 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:33.728766 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:33.733123 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:33.733203 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:33.765719 1043229 cri.go:89] found id: ""
	I1208 01:26:33.765756 1043229 logs.go:282] 0 containers: []
	W1208 01:26:33.765766 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:33.765773 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:33.765831 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:33.797034 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:33.797060 1043229 cri.go:89] found id: ""
	I1208 01:26:33.797068 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:33.797135 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:33.802202 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:33.802288 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:33.828929 1043229 cri.go:89] found id: ""
	I1208 01:26:33.828955 1043229 logs.go:282] 0 containers: []
	W1208 01:26:33.828966 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:33.828973 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:33.829034 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:33.856453 1043229 cri.go:89] found id: ""
	I1208 01:26:33.856529 1043229 logs.go:282] 0 containers: []
	W1208 01:26:33.856553 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:33.856582 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:33.856603 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:33.914280 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:33.914319 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:33.929019 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:33.929087 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:33.968995 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:33.969030 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:34.001636 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:34.001672 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:34.073367 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:34.073388 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:34.073404 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:34.115411 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:34.115448 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:34.162272 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:34.162310 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:34.194568 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:34.194649 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
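	The "container status" command uses a double fallback: the backquoted substitution (which crictl || echo crictl) expands to crictl's full path when it is on PATH and to the bare word crictl otherwise, and if that ps -a invocation fails, the outer || falls back to "sudo docker ps -a" for Docker-runtime clusters. The same preference order expressed in Go, purely as a sketch:

	    // Sketch of the crictl-then-docker fallback used for "container status".
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func main() {
	    	tool := "docker" // fallback when crictl is not installed
	    	if path, err := exec.LookPath("crictl"); err == nil {
	    		tool = path
	    	}
	    	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	    	if err != nil {
	    		fmt.Println("listing containers failed:", err)
	    	}
	    	fmt.Print(string(out))
	    }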
	I1208 01:26:36.727615 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:36.737962 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:36.738026 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:36.765938 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:36.765958 1043229 cri.go:89] found id: ""
	I1208 01:26:36.765966 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:36.766023 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:36.770419 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:36.770551 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:36.803611 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:36.803685 1043229 cri.go:89] found id: ""
	I1208 01:26:36.803708 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:36.803798 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:36.810644 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:36.810771 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:36.841964 1043229 cri.go:89] found id: ""
	I1208 01:26:36.842039 1043229 logs.go:282] 0 containers: []
	W1208 01:26:36.842062 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:36.842082 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:36.842171 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:36.876341 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:36.876417 1043229 cri.go:89] found id: ""
	I1208 01:26:36.876441 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:36.876552 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:36.882024 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:36.882161 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:36.919620 1043229 cri.go:89] found id: ""
	I1208 01:26:36.919643 1043229 logs.go:282] 0 containers: []
	W1208 01:26:36.919652 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:36.919659 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:36.919722 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:36.959451 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:36.959471 1043229 cri.go:89] found id: ""
	I1208 01:26:36.959479 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:36.959538 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:36.964417 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:36.964510 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:37.001022 1043229 cri.go:89] found id: ""
	I1208 01:26:37.001045 1043229 logs.go:282] 0 containers: []
	W1208 01:26:37.001053 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:37.001060 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:37.001118 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:37.035360 1043229 cri.go:89] found id: ""
	I1208 01:26:37.035383 1043229 logs.go:282] 0 containers: []
	W1208 01:26:37.035393 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:37.035407 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:37.035420 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:37.078152 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:37.078228 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:37.155962 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:37.155999 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:37.229827 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:37.229868 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:37.295910 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:37.295947 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:37.313730 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:37.313762 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:37.401352 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:37.401377 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:37.401390 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:37.435830 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:37.435863 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:37.475506 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:37.475539 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:40.006662 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:40.032659 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:40.032740 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:40.100893 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:40.100916 1043229 cri.go:89] found id: ""
	I1208 01:26:40.100924 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:40.100987 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:40.107785 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:40.107861 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:40.167382 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:40.167453 1043229 cri.go:89] found id: ""
	I1208 01:26:40.167476 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:40.167568 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:40.172775 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:40.172915 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:40.215507 1043229 cri.go:89] found id: ""
	I1208 01:26:40.215581 1043229 logs.go:282] 0 containers: []
	W1208 01:26:40.215606 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:40.215626 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:40.215734 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:40.257385 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:40.257459 1043229 cri.go:89] found id: ""
	I1208 01:26:40.257485 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:40.257581 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:40.262323 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:40.262463 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:40.294551 1043229 cri.go:89] found id: ""
	I1208 01:26:40.294606 1043229 logs.go:282] 0 containers: []
	W1208 01:26:40.294630 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:40.294648 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:40.294735 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:40.331556 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:40.331634 1043229 cri.go:89] found id: ""
	I1208 01:26:40.331656 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:40.331743 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:40.336328 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:40.336477 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:40.383276 1043229 cri.go:89] found id: ""
	I1208 01:26:40.383355 1043229 logs.go:282] 0 containers: []
	W1208 01:26:40.383378 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:40.383397 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:40.383491 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:40.412838 1043229 cri.go:89] found id: ""
	I1208 01:26:40.412913 1043229 logs.go:282] 0 containers: []
	W1208 01:26:40.412936 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:40.412979 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:40.413012 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:40.429949 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:40.430033 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:40.514078 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:40.514164 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:40.514197 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:40.553220 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:40.553295 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:40.587192 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:40.587317 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:40.638310 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:40.638336 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:40.702157 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:40.702239 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:40.771208 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:40.771246 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:40.826940 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:40.827027 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
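	Log retrieval itself is uniform across iterations: "crictl logs --tail 400 <id>" for each found container and "journalctl -u <unit> -n 400" for the kubelet and containerd units, while the dmesg call (util-linux flags: -P no pager, -H human-readable, -L=never no colour, --level warn,err,crit,alert,emerg) keeps only warning-and-above kernel messages. A thin Go wrapper over the same commands; the container ID below is the apiserver ID taken from this run:

	    // Thin wrapper over the retrieval commands above; purely illustrative.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func run(args ...string) {
	    	out, err := exec.Command("sudo", args...).CombinedOutput()
	    	if err != nil {
	    		fmt.Println("command failed:", err)
	    	}
	    	fmt.Print(string(out))
	    }

	    func main() {
	    	run("journalctl", "-u", "kubelet", "-n", "400")
	    	// Apiserver container ID copied from the log above.
	    	run("crictl", "logs", "--tail", "400", "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771")
	    }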
	I1208 01:26:43.370379 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:43.383700 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:43.383776 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:43.415750 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:43.415772 1043229 cri.go:89] found id: ""
	I1208 01:26:43.415781 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:43.415839 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:43.420122 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:43.420207 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:43.449966 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:43.449986 1043229 cri.go:89] found id: ""
	I1208 01:26:43.449995 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:43.450051 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:43.453925 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:43.454040 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:43.484790 1043229 cri.go:89] found id: ""
	I1208 01:26:43.484817 1043229 logs.go:282] 0 containers: []
	W1208 01:26:43.484825 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:43.484831 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:43.484892 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:43.514973 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:43.514999 1043229 cri.go:89] found id: ""
	I1208 01:26:43.515008 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:43.515072 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:43.518963 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:43.519035 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:43.545093 1043229 cri.go:89] found id: ""
	I1208 01:26:43.545120 1043229 logs.go:282] 0 containers: []
	W1208 01:26:43.545129 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:43.545136 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:43.545197 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:43.573357 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:43.573380 1043229 cri.go:89] found id: ""
	I1208 01:26:43.573393 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:43.573459 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:43.577280 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:43.577365 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:43.603510 1043229 cri.go:89] found id: ""
	I1208 01:26:43.603534 1043229 logs.go:282] 0 containers: []
	W1208 01:26:43.603542 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:43.603549 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:43.603653 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:43.633680 1043229 cri.go:89] found id: ""
	I1208 01:26:43.633705 1043229 logs.go:282] 0 containers: []
	W1208 01:26:43.633714 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:43.633758 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:43.633776 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:43.672090 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:43.672125 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:43.704562 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:43.704591 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:43.743052 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:43.743129 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:43.817111 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:43.817191 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:43.900290 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:43.900308 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:43.900320 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:43.954418 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:43.954596 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:44.007680 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:44.007774 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:44.059576 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:44.059653 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
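	One way to read the repetition: in every iteration the four static control-plane containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) are found, while coredns, kube-proxy, kindnet and storage-provisioner never appear. That split is consistent with the kubelet having started the static pods but the apiserver never serving traffic, so nothing that must be created through the API exists yet, and the health loop can never exit. A toy classifier that makes the split explicit:

	    // Toy classifier for the found/missing pattern above: everything that
	    // depends on a reachable apiserver is absent in every iteration.
	    package main

	    import "fmt"

	    func main() {
	    	found := map[string]bool{
	    		"kube-apiserver":          true,
	    		"etcd":                    true,
	    		"kube-scheduler":          true,
	    		"kube-controller-manager": true,
	    	}
	    	all := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	    	}
	    	for _, c := range all {
	    		if !found[c] {
	    			fmt.Println("missing:", c) // requires a working apiserver
	    		}
	    	}
	    }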
	I1208 01:26:46.576215 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:46.589799 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:46.589873 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:46.617219 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:46.617244 1043229 cri.go:89] found id: ""
	I1208 01:26:46.617253 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:46.617314 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:46.620919 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:46.620992 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:46.646021 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:46.646044 1043229 cri.go:89] found id: ""
	I1208 01:26:46.646053 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:46.646110 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:46.649845 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:46.649927 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:46.676330 1043229 cri.go:89] found id: ""
	I1208 01:26:46.676354 1043229 logs.go:282] 0 containers: []
	W1208 01:26:46.676363 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:46.676369 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:46.676435 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:46.703106 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:46.703129 1043229 cri.go:89] found id: ""
	I1208 01:26:46.703138 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:46.703195 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:46.706802 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:46.706880 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:46.734304 1043229 cri.go:89] found id: ""
	I1208 01:26:46.734329 1043229 logs.go:282] 0 containers: []
	W1208 01:26:46.734338 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:46.734345 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:46.734412 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:46.760970 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:46.761062 1043229 cri.go:89] found id: ""
	I1208 01:26:46.761084 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:46.761177 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:46.764804 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:46.764875 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:46.792819 1043229 cri.go:89] found id: ""
	I1208 01:26:46.792841 1043229 logs.go:282] 0 containers: []
	W1208 01:26:46.792849 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:46.792856 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:46.792913 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:46.818210 1043229 cri.go:89] found id: ""
	I1208 01:26:46.818233 1043229 logs.go:282] 0 containers: []
	W1208 01:26:46.818242 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:46.818255 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:46.818267 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:46.858962 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:46.858995 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:46.895094 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:46.895124 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:46.941756 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:46.941786 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:46.975390 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:46.975419 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:47.043939 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:47.043979 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:47.119431 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
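	[editor's note] Every `describe nodes` attempt in this run fails the same way: kubectl cannot reach the API server on localhost:8443, so the gather step captures only the connection-refused stderr. A quick way to confirm the symptom independently of kubectl is a raw TCP dial against that port; this is a hedged sketch (the port is taken from the log, the timeout is arbitrary), not part of minikube.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A raw dial distinguishes "nothing listening" (connection refused,
		// as in this log) from a slow or hung apiserver (timeout).
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}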
	I1208 01:26:47.119498 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:47.119525 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:47.172378 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:47.172411 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:47.202599 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:47.202635 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
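	[editor's note] One full gathering pass ends here: component logs come from `crictl logs --tail 400 <id>`, kubelet and containerd logs from `journalctl -u <unit> -n 400`, kernel warnings from a filtered `dmesg`, and the container-status step falls back from crictl to docker. A compact sketch of the same collection, assuming those commands exist on the node (tail lengths and paths copied from the log; the `gather` helper is illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// gather runs one command through `bash -c`, the way the ssh_runner
	// lines above do, and prints combined output so stderr survives failure.
	func gather(label, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		// Per-container logs need an ID from the crictl probe; pass one as argv[1].
		if len(os.Args) > 1 {
			gather("container "+os.Args[1], "sudo /usr/local/bin/crictl logs --tail 400 "+os.Args[1])
		}
	}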
	I1208 01:26:49.719126 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:49.730120 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:49.730195 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:49.761381 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:49.761405 1043229 cri.go:89] found id: ""
	I1208 01:26:49.761413 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:49.761470 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:49.765395 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:49.765474 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:49.792641 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:49.792665 1043229 cri.go:89] found id: ""
	I1208 01:26:49.792674 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:49.792733 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:49.796475 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:49.796548 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:49.824230 1043229 cri.go:89] found id: ""
	I1208 01:26:49.824256 1043229 logs.go:282] 0 containers: []
	W1208 01:26:49.824265 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:49.824272 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:49.824331 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:49.850995 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:49.851019 1043229 cri.go:89] found id: ""
	I1208 01:26:49.851028 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:49.851088 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:49.854708 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:49.854787 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:49.880347 1043229 cri.go:89] found id: ""
	I1208 01:26:49.880373 1043229 logs.go:282] 0 containers: []
	W1208 01:26:49.880382 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:49.880389 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:49.880451 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:49.907872 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:49.907898 1043229 cri.go:89] found id: ""
	I1208 01:26:49.907907 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:49.907971 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:49.912079 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:49.912161 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:49.938596 1043229 cri.go:89] found id: ""
	I1208 01:26:49.938621 1043229 logs.go:282] 0 containers: []
	W1208 01:26:49.938631 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:49.938638 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:49.938703 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:49.969097 1043229 cri.go:89] found id: ""
	I1208 01:26:49.969123 1043229 logs.go:282] 0 containers: []
	W1208 01:26:49.969132 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:49.969149 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:49.969160 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:50.041915 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:50.041935 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:50.041949 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:50.078293 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:50.078330 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:50.132523 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:50.132608 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:50.178973 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:50.179016 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:50.209627 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:50.209657 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:50.225371 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:50.225398 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:50.262379 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:50.262435 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:50.307528 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:50.307560 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
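	[editor's note] Between passes the runner waits a couple of seconds and re-checks for a live apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`. In this run the cycle repeats every few seconds until the test times out, which indicates the check never succeeds even though crictl keeps reporting a kube-apiserver container ID (an exited container still shows up under `ps -a`). A hedged sketch of such a wait loop follows; the interval and deadline are illustrative, not minikube's values.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep, as the log does for kube-apiserver, until
	// a matching process appears or the deadline passes. pgrep exits
	// non-zero when nothing matches, so err != nil means "not found yet".
	func waitForProcess(pattern string, interval, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, deadline)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 3*time.Second, time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is up")
	}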
	I1208 01:26:52.869425 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:52.880093 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:52.880166 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:52.908396 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:52.908429 1043229 cri.go:89] found id: ""
	I1208 01:26:52.908439 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:52.908502 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:52.912613 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:52.912684 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:52.939740 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:52.939764 1043229 cri.go:89] found id: ""
	I1208 01:26:52.939776 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:52.939837 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:52.943486 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:52.943561 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:52.969110 1043229 cri.go:89] found id: ""
	I1208 01:26:52.969137 1043229 logs.go:282] 0 containers: []
	W1208 01:26:52.969145 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:52.969151 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:52.969212 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:52.994637 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:52.994657 1043229 cri.go:89] found id: ""
	I1208 01:26:52.994666 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:52.994726 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:52.998565 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:52.998641 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:53.029146 1043229 cri.go:89] found id: ""
	I1208 01:26:53.029170 1043229 logs.go:282] 0 containers: []
	W1208 01:26:53.029179 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:53.029186 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:53.029252 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:53.055881 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:53.055949 1043229 cri.go:89] found id: ""
	I1208 01:26:53.055972 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:53.056042 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:53.059898 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:53.059980 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:53.085350 1043229 cri.go:89] found id: ""
	I1208 01:26:53.085375 1043229 logs.go:282] 0 containers: []
	W1208 01:26:53.085384 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:53.085390 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:53.085502 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:53.121989 1043229 cri.go:89] found id: ""
	I1208 01:26:53.122018 1043229 logs.go:282] 0 containers: []
	W1208 01:26:53.122027 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:53.122042 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:53.122059 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:53.163448 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:53.163482 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:53.197783 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:53.197812 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:53.226417 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:53.226596 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:53.287402 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:53.287440 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:53.303022 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:53.303054 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:53.371177 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:53.371243 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:53.371270 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:53.405626 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:53.405659 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:53.437637 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:53.437672 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:55.971630 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:55.982110 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:55.982186 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:56.010981 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:56.011015 1043229 cri.go:89] found id: ""
	I1208 01:26:56.011025 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:56.011095 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:56.015541 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:56.015618 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:56.050747 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:56.050772 1043229 cri.go:89] found id: ""
	I1208 01:26:56.050781 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:56.050846 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:56.055144 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:56.055224 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:56.086863 1043229 cri.go:89] found id: ""
	I1208 01:26:56.086893 1043229 logs.go:282] 0 containers: []
	W1208 01:26:56.086910 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:56.086918 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:56.086981 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:56.114728 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:56.114756 1043229 cri.go:89] found id: ""
	I1208 01:26:56.114765 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:56.114827 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:56.119414 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:56.119490 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:56.157967 1043229 cri.go:89] found id: ""
	I1208 01:26:56.157995 1043229 logs.go:282] 0 containers: []
	W1208 01:26:56.158004 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:56.158011 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:56.158069 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:56.187494 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:56.187529 1043229 cri.go:89] found id: ""
	I1208 01:26:56.187537 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:56.187598 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:56.191617 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:56.191722 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:56.217756 1043229 cri.go:89] found id: ""
	I1208 01:26:56.217780 1043229 logs.go:282] 0 containers: []
	W1208 01:26:56.217789 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:56.217796 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:56.217861 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:56.242228 1043229 cri.go:89] found id: ""
	I1208 01:26:56.242307 1043229 logs.go:282] 0 containers: []
	W1208 01:26:56.242331 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:56.242371 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:56.242403 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:56.308904 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:56.308930 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:56.308946 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:56.343523 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:56.343556 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:56.373541 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:56.373578 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:56.431018 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:56.431054 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:56.463314 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:56.463346 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:56.497202 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:56.497242 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:56.530816 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:56.530847 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:26:56.577663 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:56.577689 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:59.093502 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:26:59.105040 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:26:59.105114 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:26:59.147725 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:59.147751 1043229 cri.go:89] found id: ""
	I1208 01:26:59.147759 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:26:59.147815 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:59.152283 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:26:59.152367 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:26:59.180335 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:59.180355 1043229 cri.go:89] found id: ""
	I1208 01:26:59.180362 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:26:59.180476 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:59.184152 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:26:59.184233 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:26:59.210549 1043229 cri.go:89] found id: ""
	I1208 01:26:59.210574 1043229 logs.go:282] 0 containers: []
	W1208 01:26:59.210582 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:26:59.210589 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:26:59.210651 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:26:59.241544 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:59.241563 1043229 cri.go:89] found id: ""
	I1208 01:26:59.241572 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:26:59.241631 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:59.245574 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:26:59.245653 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:26:59.272242 1043229 cri.go:89] found id: ""
	I1208 01:26:59.272270 1043229 logs.go:282] 0 containers: []
	W1208 01:26:59.272279 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:26:59.272286 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:26:59.272372 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:26:59.298634 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:59.298654 1043229 cri.go:89] found id: ""
	I1208 01:26:59.298662 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:26:59.298719 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:26:59.302667 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:26:59.302738 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:26:59.327765 1043229 cri.go:89] found id: ""
	I1208 01:26:59.327844 1043229 logs.go:282] 0 containers: []
	W1208 01:26:59.327868 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:26:59.327880 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:26:59.327940 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:26:59.352724 1043229 cri.go:89] found id: ""
	I1208 01:26:59.352746 1043229 logs.go:282] 0 containers: []
	W1208 01:26:59.352755 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:26:59.352767 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:26:59.352780 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:26:59.367495 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:26:59.367520 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:26:59.432970 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:26:59.433037 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:26:59.433065 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:26:59.465572 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:26:59.465609 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:26:59.495665 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:26:59.495702 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:26:59.557288 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:26:59.557323 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:26:59.595858 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:26:59.595892 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:26:59.631143 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:26:59.631179 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:26:59.668172 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:26:59.668205 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:02.198587 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:02.209209 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:02.209333 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:02.235371 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:02.235408 1043229 cri.go:89] found id: ""
	I1208 01:27:02.235420 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:27:02.235485 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:02.239377 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:27:02.239454 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:02.265423 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:02.265441 1043229 cri.go:89] found id: ""
	I1208 01:27:02.265449 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:27:02.265508 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:02.269217 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:27:02.269289 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:02.297891 1043229 cri.go:89] found id: ""
	I1208 01:27:02.297917 1043229 logs.go:282] 0 containers: []
	W1208 01:27:02.297926 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:27:02.297932 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:02.297990 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:02.325501 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:02.325524 1043229 cri.go:89] found id: ""
	I1208 01:27:02.325532 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:27:02.325591 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:02.329285 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:02.329363 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:02.355159 1043229 cri.go:89] found id: ""
	I1208 01:27:02.355184 1043229 logs.go:282] 0 containers: []
	W1208 01:27:02.355192 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:02.355203 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:02.355296 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:02.386226 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:02.386255 1043229 cri.go:89] found id: ""
	I1208 01:27:02.386264 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:27:02.386323 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:02.390251 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:02.390323 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:02.416905 1043229 cri.go:89] found id: ""
	I1208 01:27:02.416928 1043229 logs.go:282] 0 containers: []
	W1208 01:27:02.416937 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:02.416943 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:02.417003 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:02.446857 1043229 cri.go:89] found id: ""
	I1208 01:27:02.446879 1043229 logs.go:282] 0 containers: []
	W1208 01:27:02.446887 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:02.446901 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:27:02.446913 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:02.483654 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:27:02.483686 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:02.517375 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:27:02.517409 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:02.554900 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:27:02.554932 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:02.588019 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:27:02.588052 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:27:02.617848 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:02.617884 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:02.676820 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:02.676856 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:02.693037 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:02.693076 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:02.755607 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:02.755626 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:27:02.755647 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:05.286583 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:05.298781 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:05.298852 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:05.339630 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:05.339654 1043229 cri.go:89] found id: ""
	I1208 01:27:05.339662 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:27:05.339724 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:05.351064 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:27:05.351154 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:05.391098 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:05.391120 1043229 cri.go:89] found id: ""
	I1208 01:27:05.391134 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:27:05.391191 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:05.394878 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:27:05.394986 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:05.424058 1043229 cri.go:89] found id: ""
	I1208 01:27:05.424081 1043229 logs.go:282] 0 containers: []
	W1208 01:27:05.424090 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:27:05.424163 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:05.424239 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:05.456865 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:05.456885 1043229 cri.go:89] found id: ""
	I1208 01:27:05.456900 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:27:05.456960 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:05.460768 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:05.460838 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:05.486732 1043229 cri.go:89] found id: ""
	I1208 01:27:05.486755 1043229 logs.go:282] 0 containers: []
	W1208 01:27:05.486764 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:05.486770 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:05.486830 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:05.515744 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:05.515767 1043229 cri.go:89] found id: ""
	I1208 01:27:05.515775 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:27:05.515834 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:05.519666 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:05.519740 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:05.546692 1043229 cri.go:89] found id: ""
	I1208 01:27:05.546718 1043229 logs.go:282] 0 containers: []
	W1208 01:27:05.546727 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:05.546733 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:05.546795 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:05.575064 1043229 cri.go:89] found id: ""
	I1208 01:27:05.575090 1043229 logs.go:282] 0 containers: []
	W1208 01:27:05.575099 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:05.575112 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:05.575124 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:05.590495 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:05.590526 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:05.657437 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:05.657457 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:27:05.657471 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:05.691782 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:27:05.691813 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:05.723542 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:27:05.723576 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:05.758831 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:27:05.758863 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:05.789462 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:27:05.789494 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:05.820698 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:27:05.820731 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:27:05.856749 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:05.856786 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:08.420766 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:08.440938 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:08.441021 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:08.476410 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:08.476430 1043229 cri.go:89] found id: ""
	I1208 01:27:08.476449 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:27:08.476507 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:08.480718 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:27:08.480787 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:08.520128 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:08.520147 1043229 cri.go:89] found id: ""
	I1208 01:27:08.520155 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:27:08.520213 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:08.524587 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:27:08.524658 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:08.552292 1043229 cri.go:89] found id: ""
	I1208 01:27:08.552314 1043229 logs.go:282] 0 containers: []
	W1208 01:27:08.552325 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:27:08.552331 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:08.552391 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:08.581115 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:08.581143 1043229 cri.go:89] found id: ""
	I1208 01:27:08.581151 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:27:08.581207 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:08.585092 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:08.585163 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:08.611703 1043229 cri.go:89] found id: ""
	I1208 01:27:08.611734 1043229 logs.go:282] 0 containers: []
	W1208 01:27:08.611742 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:08.611748 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:08.611806 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:08.644810 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:08.644830 1043229 cri.go:89] found id: ""
	I1208 01:27:08.644838 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:27:08.644900 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:08.649087 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:08.649159 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:08.688001 1043229 cri.go:89] found id: ""
	I1208 01:27:08.688029 1043229 logs.go:282] 0 containers: []
	W1208 01:27:08.688037 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:08.688044 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:08.688101 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:08.727763 1043229 cri.go:89] found id: ""
	I1208 01:27:08.727784 1043229 logs.go:282] 0 containers: []
	W1208 01:27:08.727793 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:08.727807 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:08.727823 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:27:08.745345 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:27:08.745370 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:08.797189 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:27:08.797364 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:08.838501 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:27:08.838583 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:27:08.889345 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:27:08.889384 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:08.943040 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:08.943072 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:09.023825 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:09.023864 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:09.110677 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:09.110698 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:27:09.110712 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:09.154028 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:27:09.154061 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:11.686576 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:11.697418 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:27:11.697493 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:27:11.731316 1043229 cri.go:89] found id: "5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:11.731336 1043229 cri.go:89] found id: ""
	I1208 01:27:11.731344 1043229 logs.go:282] 1 containers: [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771]
	I1208 01:27:11.731402 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:11.736198 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:27:11.736260 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:27:11.769344 1043229 cri.go:89] found id: "46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:11.769409 1043229 cri.go:89] found id: ""
	I1208 01:27:11.769430 1043229 logs.go:282] 1 containers: [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40]
	I1208 01:27:11.769511 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:11.773325 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:27:11.773429 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:27:11.803745 1043229 cri.go:89] found id: ""
	I1208 01:27:11.803811 1043229 logs.go:282] 0 containers: []
	W1208 01:27:11.803834 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:27:11.803851 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:27:11.803935 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:27:11.879502 1043229 cri.go:89] found id: "77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:11.879567 1043229 cri.go:89] found id: ""
	I1208 01:27:11.879588 1043229 logs.go:282] 1 containers: [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd]
	I1208 01:27:11.879688 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:11.906058 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:27:11.906175 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:27:11.940966 1043229 cri.go:89] found id: ""
	I1208 01:27:11.941033 1043229 logs.go:282] 0 containers: []
	W1208 01:27:11.941056 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:27:11.941075 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:27:11.941167 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:27:11.970646 1043229 cri.go:89] found id: "d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:11.970725 1043229 cri.go:89] found id: ""
	I1208 01:27:11.970747 1043229 logs.go:282] 1 containers: [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62]
	I1208 01:27:11.970834 1043229 ssh_runner.go:195] Run: which crictl
	I1208 01:27:11.975955 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:27:11.976096 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:27:12.007112 1043229 cri.go:89] found id: ""
	I1208 01:27:12.007140 1043229 logs.go:282] 0 containers: []
	W1208 01:27:12.007150 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:27:12.007158 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:27:12.007233 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:27:12.045254 1043229 cri.go:89] found id: ""
	I1208 01:27:12.045277 1043229 logs.go:282] 0 containers: []
	W1208 01:27:12.045285 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:27:12.045299 1043229 logs.go:123] Gathering logs for kube-scheduler [77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd] ...
	I1208 01:27:12.045311 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd"
	I1208 01:27:12.099618 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:27:12.099702 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:27:12.137723 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:27:12.137832 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:27:12.170103 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:27:12.170189 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:27:12.257791 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:27:12.257868 1043229 logs.go:123] Gathering logs for kube-apiserver [5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771] ...
	I1208 01:27:12.257896 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771"
	I1208 01:27:12.302821 1043229 logs.go:123] Gathering logs for etcd [46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40] ...
	I1208 01:27:12.302900 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40"
	I1208 01:27:12.348070 1043229 logs.go:123] Gathering logs for kube-controller-manager [d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62] ...
	I1208 01:27:12.348153 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62"
	I1208 01:27:12.401649 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:27:12.401738 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:27:12.469728 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:27:12.469814 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
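	[editor's note] The log-gathering pass above follows a fixed pattern: enumerate candidate control-plane containers with crictl, then tail each one that exists, plus the runtime and kubelet journals. A minimal sketch of that pattern, using only commands already shown in this log (the container ID is a placeholder):

		# Discover containers for one component; --quiet prints only IDs.
		sudo crictl ps -a --quiet --name=kube-apiserver
		# Tail the last 400 log lines of a discovered container (ID is a placeholder).
		sudo /usr/local/bin/crictl logs --tail 400 <container-id>
		# Unit logs for the runtime and the kubelet, as gathered above.
		sudo journalctl -u containerd -n 400
		sudo journalctl -u kubelet -n 400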
	I1208 01:27:14.989001 1043229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:27:14.999454 1043229 kubeadm.go:602] duration metric: took 4m1.789549802s to restartPrimaryControlPlane
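	[editor's note] The restart probe above relies on pgrep: -f matches against the full command line, -x requires the pattern to match it exactly, and -n reports only the newest matching process. Run by hand (same pattern as in the log, quoted for the shell), a non-zero exit status means no apiserver process was found:

		sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"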
	W1208 01:27:14.999525 1043229 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1208 01:27:14.999596 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:27:15.544823 1043229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:27:15.563944 1043229 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:27:15.577319 1043229 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:27:15.577385 1043229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:27:15.590281 1043229 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:27:15.590350 1043229 kubeadm.go:158] found existing configuration files:
	
	I1208 01:27:15.590406 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:27:15.599883 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:27:15.599941 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:27:15.611919 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:27:15.621934 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:27:15.621997 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:27:15.630176 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:27:15.639935 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:27:15.640050 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:27:15.648162 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:27:15.659156 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:27:15.659267 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
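	[editor's note] The four grep/rm pairs above implement one rule: a leftover kubeconfig is kept only if it already points at the expected control-plane endpoint; otherwise it is removed before kubeadm init rewrites it. Generalized into a loop (a sketch of the same logic, not minikube's actual code):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done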
	I1208 01:27:15.671583 1043229 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:27:15.729484 1043229 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:27:15.729894 1043229 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:27:15.834856 1043229 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:27:15.834925 1043229 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:27:15.834959 1043229 kubeadm.go:319] OS: Linux
	I1208 01:27:15.835002 1043229 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:27:15.835052 1043229 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:27:15.835098 1043229 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:27:15.835144 1043229 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:27:15.835189 1043229 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:27:15.835234 1043229 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:27:15.835277 1043229 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:27:15.835322 1043229 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:27:15.835365 1043229 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:27:15.917240 1043229 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:27:15.917351 1043229 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:27:15.917442 1043229 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:27:25.265628 1043229 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:27:25.268584 1043229 out.go:252]   - Generating certificates and keys ...
	I1208 01:27:25.268681 1043229 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:27:25.268752 1043229 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:27:25.268831 1043229 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:27:25.268895 1043229 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:27:25.268969 1043229 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:27:25.269027 1043229 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:27:25.269095 1043229 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:27:25.269519 1043229 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:27:25.269924 1043229 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:27:25.270263 1043229 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:27:25.270519 1043229 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:27:25.270578 1043229 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:27:26.000475 1043229 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:27:26.281236 1043229 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:27:26.460971 1043229 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:27:26.578924 1043229 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:27:26.938757 1043229 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:27:26.939398 1043229 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:27:26.942037 1043229 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:27:26.945830 1043229 out.go:252]   - Booting up control plane ...
	I1208 01:27:26.945974 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:27:26.946075 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:27:26.946154 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:27:26.970130 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:27:26.970400 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:27:26.978587 1043229 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:27:26.978986 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:27:26.979180 1043229 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:27:27.114092 1043229 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:27:27.114215 1043229 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:31:27.115246 1043229 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001084097s
	I1208 01:31:27.115289 1043229 kubeadm.go:319] 
	I1208 01:31:27.115348 1043229 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:31:27.115388 1043229 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:31:27.115497 1043229 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:31:27.115507 1043229 kubeadm.go:319] 
	I1208 01:31:27.115612 1043229 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:31:27.115649 1043229 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:31:27.115683 1043229 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:31:27.115691 1043229 kubeadm.go:319] 
	I1208 01:31:27.120544 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:31:27.120956 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:31:27.121061 1043229 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:31:27.121283 1043229 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:31:27.121289 1043229 kubeadm.go:319] 
	I1208 01:31:27.121362 1043229 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
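	[editor's note] The failure above is kubeadm's kubelet health check timing out. The probe can be reproduced by hand with the exact call kubeadm quotes in its error, followed by the two diagnostics kubeadm itself recommends:

		# kubeadm's own health probe, quoted in the error message above.
		curl -sSL http://127.0.0.1:10248/healthz
		# The troubleshooting commands suggested by kubeadm.
		systemctl status kubelet
		journalctl -xeu kubelet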
	W1208 01:31:27.121477 1043229 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001084097s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
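	[editor's note] Of the three preflight warnings, the cgroups v1 one is the only one that names a concrete setting: 'FailCgroupV1' must be set to 'false' for kubelet v1.35 or newer on a cgroup v1 host. A hedged sketch of that remediation, assuming the KubeletConfiguration key is the camelCased form of the name in the warning (verify against the kubelet docs before relying on it):

		# Assumption: failCgroupV1 is the KubeletConfiguration field behind the
		# 'FailCgroupV1' option named in the warning; the target path is the
		# config file the kubelet-start phase writes above.
		cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
		failCgroupV1: false
		EOF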
	
	I1208 01:31:27.121559 1043229 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:31:27.535771 1043229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
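	[editor's note] The is-active check above communicates only through its exit status: with --quiet, systemctl prints nothing and exits 0 only when the unit is active. For example:

		sudo systemctl is-active --quiet kubelet && echo active || echo not-active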
	I1208 01:31:27.551780 1043229 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:31:27.551850 1043229 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:31:27.560660 1043229 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:31:27.560685 1043229 kubeadm.go:158] found existing configuration files:
	
	I1208 01:31:27.560742 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:31:27.569159 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:31:27.569230 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:31:27.577307 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:31:27.585294 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:31:27.585386 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:31:27.593408 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:31:27.601880 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:31:27.601978 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:31:27.609945 1043229 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:31:27.618678 1043229 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:31:27.618778 1043229 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:31:27.633902 1043229 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:31:27.679277 1043229 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:31:27.679343 1043229 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:31:27.766925 1043229 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:31:27.766998 1043229 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:31:27.767037 1043229 kubeadm.go:319] OS: Linux
	I1208 01:31:27.767083 1043229 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:31:27.767133 1043229 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:31:27.767183 1043229 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:31:27.767233 1043229 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:31:27.767284 1043229 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:31:27.767340 1043229 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:31:27.767386 1043229 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:31:27.767435 1043229 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:31:27.767484 1043229 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:31:27.842861 1043229 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:31:27.843065 1043229 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:31:27.843210 1043229 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:31:27.849965 1043229 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:31:27.853007 1043229 out.go:252]   - Generating certificates and keys ...
	I1208 01:31:27.853182 1043229 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:31:27.853310 1043229 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:31:27.853430 1043229 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:31:27.853541 1043229 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:31:27.853644 1043229 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:31:27.853755 1043229 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:31:27.853873 1043229 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:31:27.853989 1043229 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:31:27.854108 1043229 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:31:27.854234 1043229 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:31:27.854436 1043229 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:31:27.854530 1043229 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:31:28.268941 1043229 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:31:28.373602 1043229 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:31:28.776416 1043229 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:31:29.287591 1043229 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:31:29.538573 1043229 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:31:29.539383 1043229 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:31:29.542738 1043229 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:31:29.546859 1043229 out.go:252]   - Booting up control plane ...
	I1208 01:31:29.547012 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:31:29.549570 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:31:29.549686 1043229 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:31:29.573877 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:31:29.573983 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:31:29.582284 1043229 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:31:29.582732 1043229 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:31:29.582776 1043229 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:31:29.742992 1043229 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:31:29.743118 1043229 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:35:29.738830 1043229 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000098972s
	I1208 01:35:29.739133 1043229 kubeadm.go:319] 
	I1208 01:35:29.739212 1043229 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:35:29.739247 1043229 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:35:29.739352 1043229 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:35:29.739357 1043229 kubeadm.go:319] 
	I1208 01:35:29.739461 1043229 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:35:29.739494 1043229 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:35:29.739524 1043229 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:35:29.739528 1043229 kubeadm.go:319] 
	I1208 01:35:29.744330 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:35:29.744758 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:35:29.744870 1043229 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:35:29.745138 1043229 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 01:35:29.745148 1043229 kubeadm.go:319] 
	I1208 01:35:29.745217 1043229 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
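	[editor's note] Two of the preflight warnings above are directly actionable on the node; the third (the missing 'configs' kernel module) depends on the AWS kernel build and may not be fixable in place. The log's own suggested fix for the service warning:

		# Suggested verbatim by the Service-kubelet warning above.
		sudo systemctl enable kubelet.service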
	I1208 01:35:29.745277 1043229 kubeadm.go:403] duration metric: took 12m16.592194891s to StartCluster
	I1208 01:35:29.745328 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:35:29.745397 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:35:29.771833 1043229 cri.go:89] found id: ""
	I1208 01:35:29.771909 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.771933 1043229 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:35:29.771947 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:35:29.772025 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:35:29.797828 1043229 cri.go:89] found id: ""
	I1208 01:35:29.797853 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.797862 1043229 logs.go:284] No container was found matching "etcd"
	I1208 01:35:29.797868 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:35:29.797931 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:35:29.823907 1043229 cri.go:89] found id: ""
	I1208 01:35:29.823931 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.823940 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:35:29.823946 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:35:29.824005 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:35:29.859309 1043229 cri.go:89] found id: ""
	I1208 01:35:29.859331 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.859339 1043229 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:35:29.859345 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:35:29.859403 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:35:29.893525 1043229 cri.go:89] found id: ""
	I1208 01:35:29.893549 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.893557 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:35:29.893564 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:35:29.893623 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:35:29.921576 1043229 cri.go:89] found id: ""
	I1208 01:35:29.921606 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.921615 1043229 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:35:29.921621 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:35:29.921683 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:35:29.947981 1043229 cri.go:89] found id: ""
	I1208 01:35:29.948008 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.948017 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:35:29.948024 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:35:29.948107 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:35:29.974892 1043229 cri.go:89] found id: ""
	I1208 01:35:29.974923 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.974932 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:35:29.974942 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:35:29.974953 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:35:30.011060 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:35:30.011468 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:35:30.153658 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:35:30.153694 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:35:30.171066 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:35:30.171096 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:35:30.243408 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:35:30.243433 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:35:30.243446 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1208 01:35:30.290236 1043229 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:35:30.290316 1043229 out.go:285] * 
	W1208 01:35:30.290386 1043229 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:35:30.290407 1043229 out.go:285] * 
	W1208 01:35:30.292657 1043229 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:35:30.298668 1043229 out.go:203] 
	W1208 01:35:30.302519 1043229 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:35:30.302587 1043229 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:35:30.302631 1043229 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:35:30.306399 1043229 out.go:203] 

** /stderr **
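
The failure above is self-describing: the kubelet never answered http://127.0.0.1:10248/healthz, and the log names two leads. A minimal sketch of both, assuming the host stays on cgroups v1; the profile name, version, and kubelet config path are taken from this log, while the exact flag and field spellings are illustrative and not verified against this minikube build:

    # Suggestion printed by minikube: force the systemd cgroup driver.
    out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 \
      --kubernetes-version=v1.35.0-beta.0 --driver=docker \
      --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd

    # The SystemVerification warning says kubelet >= v1.35 on cgroups v1 needs
    # the configuration option FailCgroupV1 set to false. One way in, via the
    # config file kubeadm writes above (field spelling assumed from the warning;
    # a kubeadm retry may rewrite this file):
    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-614992 -- \
      "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml \
       && sudo systemctl restart kubelet"
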
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-614992 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-614992 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-614992 version --output=json: exit status 1 (88.96373ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
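
The version check fails only on the server round-trip. Two quick probes separate client problems from cluster problems; the endpoint is taken from the error above, and the curl is expected to fail while the control plane is down:

    # Client-only version never contacts the apiserver:
    kubectl --context kubernetes-upgrade-614992 version --client --output=json
    # Probe the exact endpoint the error names (-k: minikube uses its own CA):
    curl -k https://192.168.76.2:8443/healthz
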
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-08 01:35:31.012125425 +0000 UTC m=+5022.248314128
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-614992
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-614992:

-- stdout --
	[
	    {
	        "Id": "9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e",
	        "Created": "2025-12-08T01:22:26.530845557Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1043686,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:22:58.221887177Z",
	            "FinishedAt": "2025-12-08T01:22:57.075561826Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e/hosts",
	        "LogPath": "/var/lib/docker/containers/9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e/9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e-json.log",
	        "Name": "/kubernetes-upgrade-614992",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-614992:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-614992",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9f4f40d418b080d4403a24871ffd4f014d31cb32d223ef5e6804440517f5da6e",
	                "LowerDir": "/var/lib/docker/overlay2/7910098c26cf60e12996522ce26dcef4cd22527570be335c52b35d62c0f94cd4-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7910098c26cf60e12996522ce26dcef4cd22527570be335c52b35d62c0f94cd4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7910098c26cf60e12996522ce26dcef4cd22527570be335c52b35d62c0f94cd4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7910098c26cf60e12996522ce26dcef4cd22527570be335c52b35d62c0f94cd4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-614992",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-614992/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-614992",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-614992",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-614992",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64cf0138e308b6ac3a84fda928d108e5ce8a7293c1ef3288b0770b7928c0dcbe",
	            "SandboxKey": "/var/run/docker/netns/64cf0138e308",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-614992": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:8c:d7:0d:3c:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "083ac709d6b4638ee8f9aa872dfe4369c09765fdbb357358a6605423d7f324b6",
	                    "EndpointID": "00a2d8b95ab0e0ef6171dfa32d0bcd574613f30701e31c0adfed3d4187de3c79",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-614992",
	                        "9f4f40d418b0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
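
Individual fields of the inspect blob above can be pulled with Go templates instead of scanning the JSON — the same mechanism the harness uses further down with --format={{.State.Status}}. For example:

    docker inspect kubernetes-upgrade-614992 --format '{{.State.Status}}'
    # Host port mapped to the apiserver (8443/tcp -> 33786 in the JSON above):
    docker inspect kubernetes-upgrade-614992 \
      --format '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}'
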
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-614992 -n kubernetes-upgrade-614992
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-614992 -n kubernetes-upgrade-614992: exit status 2 (380.897541ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
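
"exit status 2 (may be ok)" reflects that minikube status encodes component state in its exit code, so a non-zero exit alongside Host=Running just means a component below the host is stopped. The full breakdown is easier to read as JSON, assuming the status command's documented -o/--output flag:

    out/minikube-linux-arm64 status -p kubernetes-upgrade-614992 --output json
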
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-614992 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-475514 sudo cat /etc/kubernetes/kubelet.conf                                                           │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /var/lib/kubelet/config.yaml                                                           │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl status docker --all --full --no-pager                                            │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl cat docker --no-pager                                                            │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /etc/docker/daemon.json                                                                │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo docker system info                                                                         │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl status cri-docker --all --full --no-pager                                        │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl cat cri-docker --no-pager                                                        │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                   │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /usr/lib/systemd/system/cri-docker.service                                             │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cri-dockerd --version                                                                      │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl status containerd --all --full --no-pager                                        │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl cat containerd --no-pager                                                        │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /lib/systemd/system/containerd.service                                                 │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo cat /etc/containerd/config.toml                                                            │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo containerd config dump                                                                     │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl status crio --all --full --no-pager                                              │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo systemctl cat crio --no-pager                                                              │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                    │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ ssh     │ -p cilium-475514 sudo crio config                                                                                │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │                     │
	│ delete  │ -p cilium-475514                                                                                                 │ cilium-475514            │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │ 08 Dec 25 01:32 UTC │
	│ start   │ -p force-systemd-env-629029 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd │ force-systemd-env-629029 │ jenkins │ v1.37.0 │ 08 Dec 25 01:32 UTC │ 08 Dec 25 01:33 UTC │
	│ ssh     │ force-systemd-env-629029 ssh cat /etc/containerd/config.toml                                                     │ force-systemd-env-629029 │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	│ delete  │ -p force-systemd-env-629029                                                                                      │ force-systemd-env-629029 │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	│ start   │ -p cert-expiration-517238 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd     │ cert-expiration-517238   │ jenkins │ v1.37.0 │ 08 Dec 25 01:33 UTC │ 08 Dec 25 01:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:33:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:33:23.828896 1085574 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:33:23.829022 1085574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:33:23.829026 1085574 out.go:374] Setting ErrFile to fd 2...
	I1208 01:33:23.829029 1085574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:33:23.829297 1085574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:33:23.829669 1085574 out.go:368] Setting JSON to false
	I1208 01:33:23.830563 1085574 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":22557,"bootTime":1765135047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:33:23.830622 1085574 start.go:143] virtualization:  
	I1208 01:33:23.834130 1085574 out.go:179] * [cert-expiration-517238] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:33:23.838401 1085574 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:33:23.838505 1085574 notify.go:221] Checking for updates...
	I1208 01:33:23.846619 1085574 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:33:23.849886 1085574 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:33:23.853040 1085574 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:33:23.856216 1085574 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:33:23.859298 1085574 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:33:23.862979 1085574 config.go:182] Loaded profile config "kubernetes-upgrade-614992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:33:23.863080 1085574 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:33:23.907113 1085574 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:33:23.907235 1085574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:33:23.982060 1085574 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:33:23.972206133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:33:23.982162 1085574 docker.go:319] overlay module found
	I1208 01:33:23.985537 1085574 out.go:179] * Using the docker driver based on user configuration
	I1208 01:33:23.988625 1085574 start.go:309] selected driver: docker
	I1208 01:33:23.988633 1085574 start.go:927] validating driver "docker" against <nil>
	I1208 01:33:23.988650 1085574 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:33:23.989420 1085574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:33:24.049285 1085574 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:33:24.039832699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:33:24.049431 1085574 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:33:24.049650 1085574 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 01:33:24.052728 1085574 out.go:179] * Using Docker driver with root privileges
	I1208 01:33:24.055691 1085574 cni.go:84] Creating CNI manager for ""
	I1208 01:33:24.055761 1085574 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:33:24.055773 1085574 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:33:24.055858 1085574 start.go:353] cluster config:
	{Name:cert-expiration-517238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-517238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:33:24.059077 1085574 out.go:179] * Starting "cert-expiration-517238" primary control-plane node in "cert-expiration-517238" cluster
	I1208 01:33:24.062016 1085574 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:33:24.064878 1085574 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:33:24.067757 1085574 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 01:33:24.067798 1085574 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1208 01:33:24.067807 1085574 cache.go:65] Caching tarball of preloaded images
	I1208 01:33:24.067832 1085574 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:33:24.067918 1085574 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:33:24.067928 1085574 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1208 01:33:24.068039 1085574 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/config.json ...
	I1208 01:33:24.068061 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/config.json: {Name:mk0228040b3635b5ead434cd83058a5f431ea4f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:24.088947 1085574 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:33:24.088961 1085574 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:33:24.088984 1085574 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:33:24.089014 1085574 start.go:360] acquireMachinesLock for cert-expiration-517238: {Name:mk3bb84d034eda99fe0ab393ab6197c383a2eb0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:33:24.089121 1085574 start.go:364] duration metric: took 90.552µs to acquireMachinesLock for "cert-expiration-517238"
	I1208 01:33:24.089150 1085574 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-517238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-517238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:33:24.089214 1085574 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:33:24.094535 1085574 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:33:24.094783 1085574 start.go:159] libmachine.API.Create for "cert-expiration-517238" (driver="docker")
	I1208 01:33:24.094818 1085574 client.go:173] LocalClient.Create starting
	I1208 01:33:24.094929 1085574 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:33:24.094967 1085574 main.go:143] libmachine: Decoding PEM data...
	I1208 01:33:24.094982 1085574 main.go:143] libmachine: Parsing certificate...
	I1208 01:33:24.095041 1085574 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:33:24.095057 1085574 main.go:143] libmachine: Decoding PEM data...
	I1208 01:33:24.095068 1085574 main.go:143] libmachine: Parsing certificate...
	I1208 01:33:24.095449 1085574 cli_runner.go:164] Run: docker network inspect cert-expiration-517238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:33:24.111155 1085574 cli_runner.go:211] docker network inspect cert-expiration-517238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:33:24.111223 1085574 network_create.go:284] running [docker network inspect cert-expiration-517238] to gather additional debugging logs...
	I1208 01:33:24.111242 1085574 cli_runner.go:164] Run: docker network inspect cert-expiration-517238
	W1208 01:33:24.127270 1085574 cli_runner.go:211] docker network inspect cert-expiration-517238 returned with exit code 1
	I1208 01:33:24.127299 1085574 network_create.go:287] error running [docker network inspect cert-expiration-517238]: docker network inspect cert-expiration-517238: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-517238 not found
	I1208 01:33:24.127311 1085574 network_create.go:289] output of [docker network inspect cert-expiration-517238]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-517238 not found
	
	** /stderr **
	I1208 01:33:24.127417 1085574 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:33:24.144240 1085574 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:33:24.144644 1085574 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:33:24.145152 1085574 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:33:24.145456 1085574 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-083ac709d6b4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:6d:7d:a3:14:4c} reservation:<nil>}
	I1208 01:33:24.146028 1085574 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a88140}
	I1208 01:33:24.146046 1085574 network_create.go:124] attempt to create docker network cert-expiration-517238 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:33:24.146109 1085574 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-517238 cert-expiration-517238
	I1208 01:33:24.214587 1085574 network_create.go:108] docker network cert-expiration-517238 192.168.85.0/24 created
	I1208 01:33:24.214610 1085574 kic.go:121] calculated static IP "192.168.85.2" for the "cert-expiration-517238" container
	I1208 01:33:24.214681 1085574 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:33:24.233074 1085574 cli_runner.go:164] Run: docker volume create cert-expiration-517238 --label name.minikube.sigs.k8s.io=cert-expiration-517238 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:33:24.252021 1085574 oci.go:103] Successfully created a docker volume cert-expiration-517238
	I1208 01:33:24.252117 1085574 cli_runner.go:164] Run: docker run --rm --name cert-expiration-517238-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-517238 --entrypoint /usr/bin/test -v cert-expiration-517238:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:33:24.750723 1085574 oci.go:107] Successfully prepared a docker volume cert-expiration-517238
	I1208 01:33:24.750775 1085574 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 01:33:24.750784 1085574 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:33:24.750849 1085574 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-517238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:33:28.748305 1085574 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-517238:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.99740535s)
	I1208 01:33:28.748327 1085574 kic.go:203] duration metric: took 3.997539899s to extract preloaded images to volume ...
	W1208 01:33:28.748473 1085574 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:33:28.748579 1085574 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:33:28.804739 1085574 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-517238 --name cert-expiration-517238 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-517238 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-517238 --network cert-expiration-517238 --ip 192.168.85.2 --volume cert-expiration-517238:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:33:29.128243 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Running}}
	I1208 01:33:29.159745 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:29.183333 1085574 cli_runner.go:164] Run: docker exec cert-expiration-517238 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:33:29.241549 1085574 oci.go:144] the created container "cert-expiration-517238" has a running status.
	I1208 01:33:29.241568 1085574 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa...
	I1208 01:33:29.658200 1085574 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:33:29.683811 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:29.708187 1085574 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:33:29.708198 1085574 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-517238 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:33:29.778786 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:29.810802 1085574 machine.go:94] provisionDockerMachine start ...
	I1208 01:33:29.810881 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:29.841331 1085574 main.go:143] libmachine: Using SSH client type: native
	I1208 01:33:29.841676 1085574 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1208 01:33:29.841683 1085574 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:33:29.842362 1085574 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:33:32.998151 1085574 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-517238
	
	I1208 01:33:32.998168 1085574 ubuntu.go:182] provisioning hostname "cert-expiration-517238"
	I1208 01:33:32.998233 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:33.019182 1085574 main.go:143] libmachine: Using SSH client type: native
	I1208 01:33:33.019505 1085574 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1208 01:33:33.019513 1085574 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-517238 && echo "cert-expiration-517238" | sudo tee /etc/hostname
	I1208 01:33:33.186558 1085574 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-517238
	
	I1208 01:33:33.186628 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:33.205566 1085574 main.go:143] libmachine: Using SSH client type: native
	I1208 01:33:33.205888 1085574 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33818 <nil> <nil>}
	I1208 01:33:33.205903 1085574 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-517238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-517238/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-517238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:33:33.359006 1085574 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:33:33.359023 1085574 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:33:33.359042 1085574 ubuntu.go:190] setting up certificates
	I1208 01:33:33.359050 1085574 provision.go:84] configureAuth start
	I1208 01:33:33.359131 1085574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-517238
	I1208 01:33:33.377251 1085574 provision.go:143] copyHostCerts
	I1208 01:33:33.377311 1085574 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:33:33.377319 1085574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:33:33.377427 1085574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:33:33.377516 1085574 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:33:33.377520 1085574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:33:33.377544 1085574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:33:33.377591 1085574 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:33:33.377594 1085574 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:33:33.377615 1085574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:33:33.377657 1085574 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-517238 san=[127.0.0.1 192.168.85.2 cert-expiration-517238 localhost minikube]
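
The san=[...] list above feeds directly into the server certificate's subject alternative names. A self-contained Go sketch of generating such a cert with crypto/x509 follows; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem shown in the log, so treat the fields as illustrative.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Stand-ins for the SAN set in the log:
		// [127.0.0.1 192.168.85.2 cert-expiration-517238 localhost minikube]
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")}
		dns := []string{"cert-expiration-517238", "localhost", "minikube"}
	
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-517238"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * time.Minute), // CertExpiration:3m0s in this profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		// Self-signed here (template == parent); minikube signs with its CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
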
	I1208 01:33:34.042026 1085574 provision.go:177] copyRemoteCerts
	I1208 01:33:34.042090 1085574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:33:34.042132 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:34.061549 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:34.166461 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:33:34.184159 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1208 01:33:34.201652 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:33:34.220216 1085574 provision.go:87] duration metric: took 861.14222ms to configureAuth
	I1208 01:33:34.220234 1085574 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:33:34.220435 1085574 config.go:182] Loaded profile config "cert-expiration-517238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:33:34.220441 1085574 machine.go:97] duration metric: took 4.409628502s to provisionDockerMachine
	I1208 01:33:34.220446 1085574 client.go:176] duration metric: took 10.125623672s to LocalClient.Create
	I1208 01:33:34.220470 1085574 start.go:167] duration metric: took 10.125687483s to libmachine.API.Create "cert-expiration-517238"
	I1208 01:33:34.220477 1085574 start.go:293] postStartSetup for "cert-expiration-517238" (driver="docker")
	I1208 01:33:34.220485 1085574 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:33:34.220542 1085574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:33:34.220580 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:34.238883 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:34.348056 1085574 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:33:34.352711 1085574 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:33:34.352729 1085574 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:33:34.352739 1085574 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:33:34.352795 1085574 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:33:34.352897 1085574 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:33:34.352995 1085574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:33:34.360788 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:33:34.379011 1085574 start.go:296] duration metric: took 158.520874ms for postStartSetup
	I1208 01:33:34.379374 1085574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-517238
	I1208 01:33:34.402994 1085574 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/config.json ...
	I1208 01:33:34.403252 1085574 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:33:34.403289 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:34.420523 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:34.523555 1085574 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:33:34.528499 1085574 start.go:128] duration metric: took 10.439272173s to createHost
	I1208 01:33:34.528515 1085574 start.go:83] releasing machines lock for "cert-expiration-517238", held for 10.439387342s
	I1208 01:33:34.528586 1085574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-517238
	I1208 01:33:34.545549 1085574 ssh_runner.go:195] Run: cat /version.json
	I1208 01:33:34.545594 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:34.545843 1085574 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:33:34.545906 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:34.573531 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:34.574805 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:34.678160 1085574 ssh_runner.go:195] Run: systemctl --version
	I1208 01:33:34.772574 1085574 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:33:34.777120 1085574 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:33:34.777186 1085574 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:33:34.804794 1085574 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:33:34.804808 1085574 start.go:496] detecting cgroup driver to use...
	I1208 01:33:34.804840 1085574 detect.go:187] detected "cgroupfs" cgroup driver on host os
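
The driver detection above reported "cgroupfs" on this host. A rough, heuristic Go probe of the host's cgroup mode, assuming only that cgroup v2 exposes cgroup.controllers at the unified mount; minikube's real detect.go logic is more involved, so this is a sketch, not its implementation.

	package main
	
	import (
		"fmt"
		"os"
	)
	
	// cgroupMode is a heuristic: cgroup v2 hosts expose cgroup.controllers
	// at the unified mount point, v1 hosts (like this 5.15 AWS kernel,
	// per the kubeadm warning later in the log) do not.
	func cgroupMode() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "v2 (systemd driver likely)"
		}
		return "v1 (cgroupfs driver likely)"
	}
	
	func main() { fmt.Println("cgroup mode:", cgroupMode()) }
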
	I1208 01:33:34.804888 1085574 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:33:34.819594 1085574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:33:34.832837 1085574 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:33:34.832896 1085574 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:33:34.850073 1085574 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:33:34.868693 1085574 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:33:34.990662 1085574 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:33:35.131160 1085574 docker.go:234] disabling docker service ...
	I1208 01:33:35.131234 1085574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:33:35.166799 1085574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:33:35.180416 1085574 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:33:35.293257 1085574 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:33:35.409291 1085574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:33:35.421769 1085574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:33:35.435600 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:33:35.444135 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:33:35.453035 1085574 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:33:35.453095 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:33:35.462362 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:33:35.471271 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:33:35.479844 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:33:35.488513 1085574 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:33:35.497054 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:33:35.505884 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:33:35.515627 1085574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:33:35.524781 1085574 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:33:35.532742 1085574 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:33:35.540358 1085574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:33:35.655073 1085574 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 01:33:35.784958 1085574 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:33:35.785018 1085574 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
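
The 60-second wait for /run/containerd/containerd.sock after the restart is a poll-until-the-socket-exists loop. A minimal Go sketch of that wait, not minikube's implementation:

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists and is a unix socket, mirroring
	// the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("containerd socket is ready")
	}
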
	I1208 01:33:35.789240 1085574 start.go:564] Will wait 60s for crictl version
	I1208 01:33:35.789297 1085574 ssh_runner.go:195] Run: which crictl
	I1208 01:33:35.792882 1085574 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:33:35.819671 1085574 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:33:35.819729 1085574 ssh_runner.go:195] Run: containerd --version
	I1208 01:33:35.846042 1085574 ssh_runner.go:195] Run: containerd --version
	I1208 01:33:35.880593 1085574 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1208 01:33:35.883650 1085574 cli_runner.go:164] Run: docker network inspect cert-expiration-517238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:33:35.904217 1085574 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:33:35.908737 1085574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
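
The grep/tee pipeline above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal line, then append the current mapping. The same idea in Go (pinHost is a hypothetical helper, and it writes to a sidecar file instead of /etc/hosts for safety):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// pinHost removes any line ending in "\t<name>" and appends a fresh
	// "ip\tname" entry, matching the grep -v $'\t<name>$' filter above.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
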
	I1208 01:33:35.919258 1085574 kubeadm.go:884] updating cluster {Name:cert-expiration-517238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-517238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:33:35.919363 1085574 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 01:33:35.919429 1085574 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:33:35.944947 1085574 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:33:35.944960 1085574 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:33:35.945023 1085574 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:33:35.972699 1085574 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:33:35.972714 1085574 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:33:35.972721 1085574 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1208 01:33:35.972809 1085574 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-517238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-517238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:33:35.972874 1085574 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:33:35.998787 1085574 cni.go:84] Creating CNI manager for ""
	I1208 01:33:35.998798 1085574 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:33:35.998820 1085574 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:33:35.998842 1085574 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-517238 NodeName:cert-expiration-517238 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:33:35.998976 1085574 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "cert-expiration-517238"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
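Configs like the kubeadm.yaml dumped above are rendered from cluster parameters via Go templates. A stripped-down sketch with text/template; the struct fields here are illustrative stand-ins, not minikube's actual template data.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// A fragment of a ClusterConfiguration rendered from parameters that
	// appear in the log (endpoint, port, version, pod/service CIDRs).
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`
	
	func main() {
		params := struct {
			ControlPlaneEndpoint, KubernetesVersion, PodSubnet, ServiceCIDR string
			APIServerPort                                                   int
		}{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			KubernetesVersion:    "v1.34.2",
			PodSubnet:            "10.244.0.0/16",
			ServiceCIDR:          "10.96.0.0/12",
			APIServerPort:        8443,
		}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params)
	}
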
	I1208 01:33:35.999054 1085574 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 01:33:36.008582 1085574 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:33:36.008665 1085574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:33:36.019305 1085574 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1208 01:33:36.033326 1085574 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 01:33:36.047388 1085574 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1208 01:33:36.060856 1085574 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:33:36.064566 1085574 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:33:36.075088 1085574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:33:36.201161 1085574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:33:36.217162 1085574 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238 for IP: 192.168.85.2
	I1208 01:33:36.217173 1085574 certs.go:195] generating shared ca certs ...
	I1208 01:33:36.217188 1085574 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:36.217344 1085574 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:33:36.217392 1085574 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:33:36.217398 1085574 certs.go:257] generating profile certs ...
	I1208 01:33:36.217456 1085574 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.key
	I1208 01:33:36.217465 1085574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.crt with IP's: []
	I1208 01:33:36.491791 1085574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.crt ...
	I1208 01:33:36.491809 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.crt: {Name:mk892efa15c33a75ec415e7be9f1278c2fe449e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:36.492016 1085574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.key ...
	I1208 01:33:36.492034 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/client.key: {Name:mk7342ec90a6a0c7f66d2818d7ea196199da98ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:36.492128 1085574 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key.1a9b8360
	I1208 01:33:36.492140 1085574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt.1a9b8360 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:33:36.685223 1085574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt.1a9b8360 ...
	I1208 01:33:36.685239 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt.1a9b8360: {Name:mk937d316798c9a0392b97c6dee66629ced67d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:36.685467 1085574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key.1a9b8360 ...
	I1208 01:33:36.685476 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key.1a9b8360: {Name:mk465a431aa7e08c4bb4a1b79a8e17c0682f2f38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:36.685572 1085574 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt.1a9b8360 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt
	I1208 01:33:36.685658 1085574 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key.1a9b8360 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key
	I1208 01:33:36.685730 1085574 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.key
	I1208 01:33:36.685746 1085574 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.crt with IP's: []
	I1208 01:33:37.113999 1085574 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.crt ...
	I1208 01:33:37.114015 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.crt: {Name:mk8e6bf946c6b0358f08f476a364cd33cbe6705c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:37.114201 1085574 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.key ...
	I1208 01:33:37.114208 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.key: {Name:mk17436eefa5677455b17ad2e89939045e727ea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:37.114387 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:33:37.114426 1085574 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:33:37.114435 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:33:37.114493 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:33:37.114520 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:33:37.114543 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:33:37.114589 1085574 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:33:37.115204 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:33:37.134779 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:33:37.152928 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:33:37.171564 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:33:37.189788 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1208 01:33:37.209397 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 01:33:37.227462 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:33:37.246049 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/cert-expiration-517238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:33:37.264703 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:33:37.283311 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:33:37.301575 1085574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:33:37.319214 1085574 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:33:37.331583 1085574 ssh_runner.go:195] Run: openssl version
	I1208 01:33:37.337678 1085574 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:33:37.351192 1085574 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:33:37.362078 1085574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:33:37.366173 1085574 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:33:37.366229 1085574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:33:37.408810 1085574 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:33:37.417354 1085574 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:33:37.424761 1085574 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:33:37.432351 1085574 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:33:37.439689 1085574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:33:37.443323 1085574 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:33:37.443379 1085574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:33:37.484619 1085574 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:33:37.492204 1085574 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:33:37.499821 1085574 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:33:37.507482 1085574 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:33:37.516441 1085574 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:33:37.520454 1085574 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:33:37.520520 1085574 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:33:37.562041 1085574 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:33:37.569683 1085574 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
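
The openssl x509 -hash / ln -fs pairs above (e.g. b5213941.0 for minikubeCA.pem) create the subject-hash symlinks that OpenSSL-based clients use to locate a trusted CA. A small Go sketch of the same dance, shelling out to openssl (assumed present) for the hash; installCA is a hypothetical helper, not minikube's code.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// installCA computes the OpenSSL subject hash of a PEM cert and links
	// /etc/ssl/certs/<hash>.0 at it, like the log's ln -fs step.
	func installCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}
	
	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
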
	I1208 01:33:37.577174 1085574 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:33:37.580829 1085574 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:33:37.580872 1085574 kubeadm.go:401] StartCluster: {Name:cert-expiration-517238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-517238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:33:37.580937 1085574 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:33:37.580994 1085574 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:33:37.608288 1085574 cri.go:89] found id: ""
	I1208 01:33:37.608349 1085574 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:33:37.616365 1085574 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:33:37.624426 1085574 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:33:37.624486 1085574 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:33:37.632251 1085574 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:33:37.632270 1085574 kubeadm.go:158] found existing configuration files:
	
	I1208 01:33:37.632326 1085574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:33:37.640189 1085574 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:33:37.640252 1085574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:33:37.648124 1085574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:33:37.656039 1085574 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:33:37.656103 1085574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:33:37.663399 1085574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:33:37.671159 1085574 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:33:37.671216 1085574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:33:37.678415 1085574 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:33:37.686120 1085574 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:33:37.686184 1085574 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:33:37.693889 1085574 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:33:37.738584 1085574 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 01:33:37.738634 1085574 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:33:37.762012 1085574 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:33:37.762078 1085574 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:33:37.762111 1085574 kubeadm.go:319] OS: Linux
	I1208 01:33:37.762156 1085574 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:33:37.762203 1085574 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:33:37.762250 1085574 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:33:37.762297 1085574 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:33:37.762344 1085574 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:33:37.762391 1085574 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:33:37.762473 1085574 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:33:37.762521 1085574 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:33:37.762565 1085574 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:33:37.841059 1085574 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:33:37.841162 1085574 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:33:37.841261 1085574 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:33:37.846983 1085574 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:33:37.853269 1085574 out.go:252]   - Generating certificates and keys ...
	I1208 01:33:37.853365 1085574 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:33:37.853429 1085574 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:33:38.598855 1085574 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:33:38.693086 1085574 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:33:39.325598 1085574 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:33:40.078777 1085574 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:33:40.713141 1085574 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:33:40.713288 1085574 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-517238 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:33:41.162764 1085574 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:33:41.163348 1085574 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-517238 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:33:41.600321 1085574 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:33:42.268249 1085574 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:33:42.712111 1085574 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:33:42.712185 1085574 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:33:43.100911 1085574 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:33:43.407812 1085574 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:33:44.327616 1085574 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:33:44.511605 1085574 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:33:45.640179 1085574 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:33:45.640270 1085574 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:33:45.644620 1085574 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:33:45.647894 1085574 out.go:252]   - Booting up control plane ...
	I1208 01:33:45.647995 1085574 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:33:45.650824 1085574 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:33:45.653639 1085574 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:33:45.671603 1085574 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:33:45.671704 1085574 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:33:45.679178 1085574 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:33:45.679416 1085574 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:33:45.679599 1085574 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:33:45.825525 1085574 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:33:45.825638 1085574 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:33:46.826557 1085574 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001025057s
	I1208 01:33:46.830152 1085574 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 01:33:46.830262 1085574 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1208 01:33:46.830363 1085574 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 01:33:46.830489 1085574 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 01:33:49.334212 1085574 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.503102569s
	I1208 01:33:51.061979 1085574 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.231806436s
	I1208 01:33:52.832397 1085574 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002185782s
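
The control-plane-check phase above simply polls the three health endpoints until each answers. A minimal Go sketch of such a wait loop; it skips TLS verification only because the sketch carries none of kubeadm's client certificates.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it returns 200, mirroring the
	// "Waiting for healthy control plane components" step (up to 4m0s).
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver is healthy")
	}
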
	I1208 01:33:52.872427 1085574 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 01:33:52.888946 1085574 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 01:33:52.903904 1085574 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 01:33:52.904154 1085574 kubeadm.go:319] [mark-control-plane] Marking the node cert-expiration-517238 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 01:33:52.928509 1085574 kubeadm.go:319] [bootstrap-token] Using token: q5c2w3.fe9ea1fyhvbp2p1q
	I1208 01:33:52.931527 1085574 out.go:252]   - Configuring RBAC rules ...
	I1208 01:33:52.931668 1085574 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 01:33:52.937009 1085574 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 01:33:52.948182 1085574 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 01:33:52.954635 1085574 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 01:33:52.958999 1085574 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 01:33:52.963316 1085574 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 01:33:53.240514 1085574 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 01:33:53.682031 1085574 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 01:33:54.241141 1085574 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 01:33:54.242647 1085574 kubeadm.go:319] 
	I1208 01:33:54.242714 1085574 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 01:33:54.242718 1085574 kubeadm.go:319] 
	I1208 01:33:54.242794 1085574 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 01:33:54.242796 1085574 kubeadm.go:319] 
	I1208 01:33:54.242820 1085574 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 01:33:54.242877 1085574 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 01:33:54.242926 1085574 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 01:33:54.242929 1085574 kubeadm.go:319] 
	I1208 01:33:54.242981 1085574 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 01:33:54.242984 1085574 kubeadm.go:319] 
	I1208 01:33:54.243030 1085574 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 01:33:54.243033 1085574 kubeadm.go:319] 
	I1208 01:33:54.243083 1085574 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 01:33:54.243157 1085574 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 01:33:54.243224 1085574 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 01:33:54.243227 1085574 kubeadm.go:319] 
	I1208 01:33:54.243309 1085574 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 01:33:54.243385 1085574 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 01:33:54.243387 1085574 kubeadm.go:319] 
	I1208 01:33:54.243470 1085574 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token q5c2w3.fe9ea1fyhvbp2p1q \
	I1208 01:33:54.243572 1085574 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:530bb87f6b7bce2c34e587eb3e9dbf3b51e460a8d6fb3f3266be1a74dec16e58 \
	I1208 01:33:54.243590 1085574 kubeadm.go:319] 	--control-plane 
	I1208 01:33:54.243593 1085574 kubeadm.go:319] 
	I1208 01:33:54.243691 1085574 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 01:33:54.243694 1085574 kubeadm.go:319] 
	I1208 01:33:54.243781 1085574 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token q5c2w3.fe9ea1fyhvbp2p1q \
	I1208 01:33:54.243882 1085574 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:530bb87f6b7bce2c34e587eb3e9dbf3b51e460a8d6fb3f3266be1a74dec16e58 
	I1208 01:33:54.248963 1085574 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 01:33:54.249179 1085574 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:33:54.249282 1085574 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:33:54.249296 1085574 cni.go:84] Creating CNI manager for ""
	I1208 01:33:54.249303 1085574 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:33:54.252388 1085574 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 01:33:54.259882 1085574 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 01:33:54.264580 1085574 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 01:33:54.264591 1085574 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 01:33:54.277654 1085574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 01:33:54.595182 1085574 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 01:33:54.595307 1085574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 01:33:54.595316 1085574 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-517238 minikube.k8s.io/updated_at=2025_12_08T01_33_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=cert-expiration-517238 minikube.k8s.io/primary=true
	I1208 01:33:54.838010 1085574 kubeadm.go:1114] duration metric: took 242.797288ms to wait for elevateKubeSystemPrivileges
	I1208 01:33:54.838040 1085574 ops.go:34] apiserver oom_adj: -16
	I1208 01:33:54.838067 1085574 kubeadm.go:403] duration metric: took 17.25719743s to StartCluster
	I1208 01:33:54.838087 1085574 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:54.838160 1085574 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:33:54.839202 1085574 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:33:54.839473 1085574 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:33:54.839608 1085574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 01:33:54.839877 1085574 config.go:182] Loaded profile config "cert-expiration-517238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:33:54.839912 1085574 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:33:54.839977 1085574 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-517238"
	I1208 01:33:54.839992 1085574 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-517238"
	I1208 01:33:54.840013 1085574 host.go:66] Checking if "cert-expiration-517238" exists ...
	I1208 01:33:54.840672 1085574 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-517238"
	I1208 01:33:54.840689 1085574 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-517238"
	I1208 01:33:54.840746 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:54.840983 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:54.844905 1085574 out.go:179] * Verifying Kubernetes components...
	I1208 01:33:54.849175 1085574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:33:54.894293 1085574 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-517238"
	I1208 01:33:54.894321 1085574 host.go:66] Checking if "cert-expiration-517238" exists ...
	I1208 01:33:54.894861 1085574 cli_runner.go:164] Run: docker container inspect cert-expiration-517238 --format={{.State.Status}}
	I1208 01:33:54.906689 1085574 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:33:54.910389 1085574 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:33:54.910401 1085574 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:33:54.910614 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:54.934701 1085574 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:33:54.934721 1085574 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:33:54.934779 1085574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-517238
	I1208 01:33:54.952220 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:54.978756 1085574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33818 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/cert-expiration-517238/id_rsa Username:docker}
	I1208 01:33:55.137419 1085574 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 01:33:55.185671 1085574 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:33:55.231545 1085574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:33:55.325938 1085574 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:33:55.554969 1085574 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
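Note: the sed pipeline logged at 01:33:55 above splices this host record into the stock Corefile just ahead of the forward plugin (and adds log before errors). A minimal sketch of the resulting fragment, assuming the default CoreDNS server block, is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The fallthrough directive lets lookups that do not match host.minikube.internal continue on to the forward plugin, so cluster and upstream DNS behavior is otherwise unchanged.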
	I1208 01:33:55.556571 1085574 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:33:55.556709 1085574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:33:55.868864 1085574 api_server.go:72] duration metric: took 1.029336781s to wait for apiserver process to appear ...
	I1208 01:33:55.868876 1085574 api_server.go:88] waiting for apiserver healthz status ...
	I1208 01:33:55.868890 1085574 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1208 01:33:55.883141 1085574 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1208 01:33:55.884234 1085574 api_server.go:141] control plane version: v1.34.2
	I1208 01:33:55.884249 1085574 api_server.go:131] duration metric: took 15.367975ms to wait for apiserver health ...
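Note: this readiness gate is a plain HTTPS GET. An equivalent manual probe (assuming network access to the node IP; -k skips verification of minikube's self-signed certificate) is:

	curl -k https://192.168.85.2:8443/healthz

A healthy apiserver answers 200 with the body "ok", matching the two log lines above.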
	I1208 01:33:55.884257 1085574 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 01:33:55.884343 1085574 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1208 01:33:55.887303 1085574 system_pods.go:59] 5 kube-system pods found
	I1208 01:33:55.887323 1085574 system_pods.go:61] "etcd-cert-expiration-517238" [08188e53-1905-4826-876a-5fb67ba5a410] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 01:33:55.887331 1085574 system_pods.go:61] "kube-apiserver-cert-expiration-517238" [1ccfd8f8-f4f6-4370-98ff-d3a21e002c8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 01:33:55.887337 1085574 system_pods.go:61] "kube-controller-manager-cert-expiration-517238" [b9674400-fe6e-4777-8c6a-26236d65ffad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 01:33:55.887347 1085574 system_pods.go:61] "kube-scheduler-cert-expiration-517238" [bdd931bd-333a-444b-bab4-0c7660c5737c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 01:33:55.887351 1085574 system_pods.go:61] "storage-provisioner" [a3899e35-c85c-4970-94d5-472196473900] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1208 01:33:55.887355 1085574 system_pods.go:74] duration metric: took 3.094224ms to wait for pod list to return data ...
	I1208 01:33:55.887365 1085574 kubeadm.go:587] duration metric: took 1.047842673s to wait for: map[apiserver:true system_pods:true]
	I1208 01:33:55.887377 1085574 node_conditions.go:102] verifying NodePressure condition ...
	I1208 01:33:55.888437 1085574 addons.go:530] duration metric: took 1.048519559s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1208 01:33:55.890257 1085574 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1208 01:33:55.890276 1085574 node_conditions.go:123] node cpu capacity is 2
	I1208 01:33:55.890287 1085574 node_conditions.go:105] duration metric: took 2.907098ms to run NodePressure ...
	I1208 01:33:55.890298 1085574 start.go:242] waiting for startup goroutines ...
	I1208 01:33:56.060168 1085574 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-517238" context rescaled to 1 replicas
	I1208 01:33:56.060207 1085574 start.go:247] waiting for cluster config update ...
	I1208 01:33:56.060218 1085574 start.go:256] writing updated cluster config ...
	I1208 01:33:56.060545 1085574 ssh_runner.go:195] Run: rm -f paused
	I1208 01:33:56.145945 1085574 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1208 01:33:56.151179 1085574 out.go:179] * Done! kubectl is now configured to use "cert-expiration-517238" cluster and "default" namespace by default
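Note: the skew reported by start.go:625 is benign; kubectl is supported within one minor version of the kube-apiserver, so a v1.33.2 client against a v1.34.2 control plane is inside the documented policy. The pairing can be re-checked with:

	kubectl version --output=json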
	I1208 01:35:29.738830 1043229 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000098972s
	I1208 01:35:29.739133 1043229 kubeadm.go:319] 
	I1208 01:35:29.739212 1043229 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:35:29.739247 1043229 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:35:29.739352 1043229 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:35:29.739357 1043229 kubeadm.go:319] 
	I1208 01:35:29.739461 1043229 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:35:29.739494 1043229 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:35:29.739524 1043229 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:35:29.739528 1043229 kubeadm.go:319] 
	I1208 01:35:29.744330 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:35:29.744758 1043229 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:35:29.744870 1043229 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:35:29.745138 1043229 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1208 01:35:29.745148 1043229 kubeadm.go:319] 
	I1208 01:35:29.745217 1043229 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:35:29.745277 1043229 kubeadm.go:403] duration metric: took 12m16.592194891s to StartCluster
	I1208 01:35:29.745328 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:35:29.745397 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:35:29.771833 1043229 cri.go:89] found id: ""
	I1208 01:35:29.771909 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.771933 1043229 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:35:29.771947 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:35:29.772025 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:35:29.797828 1043229 cri.go:89] found id: ""
	I1208 01:35:29.797853 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.797862 1043229 logs.go:284] No container was found matching "etcd"
	I1208 01:35:29.797868 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:35:29.797931 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:35:29.823907 1043229 cri.go:89] found id: ""
	I1208 01:35:29.823931 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.823940 1043229 logs.go:284] No container was found matching "coredns"
	I1208 01:35:29.823946 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:35:29.824005 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:35:29.859309 1043229 cri.go:89] found id: ""
	I1208 01:35:29.859331 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.859339 1043229 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:35:29.859345 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:35:29.859403 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:35:29.893525 1043229 cri.go:89] found id: ""
	I1208 01:35:29.893549 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.893557 1043229 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:35:29.893564 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:35:29.893623 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:35:29.921576 1043229 cri.go:89] found id: ""
	I1208 01:35:29.921606 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.921615 1043229 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:35:29.921621 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:35:29.921683 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:35:29.947981 1043229 cri.go:89] found id: ""
	I1208 01:35:29.948008 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.948017 1043229 logs.go:284] No container was found matching "kindnet"
	I1208 01:35:29.948024 1043229 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1208 01:35:29.948107 1043229 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1208 01:35:29.974892 1043229 cri.go:89] found id: ""
	I1208 01:35:29.974923 1043229 logs.go:282] 0 containers: []
	W1208 01:35:29.974932 1043229 logs.go:284] No container was found matching "storage-provisioner"
	I1208 01:35:29.974942 1043229 logs.go:123] Gathering logs for container status ...
	I1208 01:35:29.974953 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:35:30.011060 1043229 logs.go:123] Gathering logs for kubelet ...
	I1208 01:35:30.011468 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:35:30.153658 1043229 logs.go:123] Gathering logs for dmesg ...
	I1208 01:35:30.153694 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:35:30.171066 1043229 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:35:30.171096 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:35:30.243408 1043229 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:35:30.243433 1043229 logs.go:123] Gathering logs for containerd ...
	I1208 01:35:30.243446 1043229 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1208 01:35:30.290236 1043229 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:35:30.290316 1043229 out.go:285] * 
	W1208 01:35:30.290386 1043229 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:35:30.290407 1043229 out.go:285] * 
	W1208 01:35:30.292657 1043229 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:35:30.298668 1043229 out.go:203] 
	W1208 01:35:30.302519 1043229 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000098972s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:35:30.302587 1043229 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:35:30.302631 1043229 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:35:30.306399 1043229 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:27:22 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:22.793508772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:27:22 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:22.794862303Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.30426446s"
	Dec 08 01:27:22 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:22.794906702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
	Dec 08 01:27:22 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:22.796016612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.450493517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.452317395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.455187339Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.458388896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.459127160Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 663.073468ms"
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.459171091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
	Dec 08 01:27:23 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:23.460599371Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.253566179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.256096181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21140371"
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.258130696Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.263466666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.264571948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.803933126s"
	Dec 08 01:27:25 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:27:25.264729783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\""
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.446205547Z" level=info msg="container event discarded" container=5b94717dbcea9dbefd161a39b39cc7fd438697ac09c7e5faa4d486fe44efb771 type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.462105411Z" level=info msg="container event discarded" container=b675f3d4e8bd66acb5a2f4950a92b58a9448b1a855997f6c80d034fe37758adf type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.476403241Z" level=info msg="container event discarded" container=d3a977f7e025f97a6113ad94ad3c7cb25c0b7be0a19483f32407af234ccbdf62 type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.476471754Z" level=info msg="container event discarded" container=1625977029fc285a017c3cfb8da4ffe2708e4765f3728a445bda6e3e38c7f45a type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.509661764Z" level=info msg="container event discarded" container=46a9edf98c1e82aaf642671ba53b0444f7e34dd85e4144307ef4e1cb0e2cdd40 type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.509722934Z" level=info msg="container event discarded" container=250c4a5eb9d8c8b351760639c387f10321ce4326a8ca608893f827262359bcd8 type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.536928344Z" level=info msg="container event discarded" container=77f37482665ebe04cef6910dc26709c599a054d4f74ec68699028c8269ce53bd type=CONTAINER_DELETED_EVENT
	Dec 08 01:32:15 kubernetes-upgrade-614992 containerd[554]: time="2025-12-08T01:32:15.536989005Z" level=info msg="container event discarded" container=318026aaa427babdca7a720dcc2002b5077bd95d9aa40388bda03cd95ae5c90f type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:35:32 up  6:18,  0 user,  load average: 0.97, 2.01, 2.23
	Linux kubernetes-upgrade-614992 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:29 kubernetes-upgrade-614992 kubelet[14223]: E1208 01:35:29.910137   14223 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:35:29 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:35:30 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 01:35:30 kubernetes-upgrade-614992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:30 kubernetes-upgrade-614992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:30 kubernetes-upgrade-614992 kubelet[14289]: E1208 01:35:30.661226   14289 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:35:30 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:35:30 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:35:31 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 01:35:31 kubernetes-upgrade-614992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:31 kubernetes-upgrade-614992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:31 kubernetes-upgrade-614992 kubelet[14308]: E1208 01:35:31.365858   14308 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:35:31 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:35:31 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:35:32 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 08 01:35:32 kubernetes-upgrade-614992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:32 kubernetes-upgrade-614992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:35:32 kubernetes-upgrade-614992 kubelet[14406]: E1208 01:35:32.157378   14406 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:35:32 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:35:32 kubernetes-upgrade-614992 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-614992 -n kubernetes-upgrade-614992
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-614992 -n kubernetes-upgrade-614992: exit status 2 (386.351572ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-614992" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-614992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-614992
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-614992: (2.966031086s)
--- FAIL: TestKubernetesUpgrade (795.98s)
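Note: the kubelet journal excerpt above shows the actual failure mode for this upgrade: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the suggested --extra-config=kubelet.cgroup-driver=systemd does not address the root cause. The preflight warning names the relevant knob; a minimal sketch of a kubelet configuration that opts back into cgroup v1, with the field name inferred from the 'FailCgroupV1' option cited in that warning, is:

	# sketch only: opt kubelet back into cgroup v1 per the 'FailCgroupV1' option
	# named in the SystemVerification warning; the warning also requires
	# explicitly skipping that validation
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Per the warning text, migrating the host to cgroup v2 is the supported path; this override only defers the removal tracked in KEP sig-node/5573.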

TestStartStop/group/no-preload/serial/FirstStart (511.31s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m29.742473301s)

-- stdout --
	* [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...

-- /stdout --
** stderr ** 
	I1208 01:37:07.000245 1096912 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:37:07.000373 1096912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:37:07.000385 1096912 out.go:374] Setting ErrFile to fd 2...
	I1208 01:37:07.000390 1096912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:37:07.000797 1096912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:37:07.001339 1096912 out.go:368] Setting JSON to false
	I1208 01:37:07.003072 1096912 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":22780,"bootTime":1765135047,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:37:07.003166 1096912 start.go:143] virtualization:  
	I1208 01:37:07.007228 1096912 out.go:179] * [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:37:07.011878 1096912 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:37:07.011928 1096912 notify.go:221] Checking for updates...
	I1208 01:37:07.015401 1096912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:37:07.018919 1096912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:37:07.022155 1096912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:37:07.025253 1096912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:37:07.028407 1096912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:37:07.032171 1096912 config.go:182] Loaded profile config "old-k8s-version-895688": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1208 01:37:07.032281 1096912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:37:07.059813 1096912 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:37:07.059935 1096912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:37:07.128079 1096912 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:37:07.117638056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:37:07.128206 1096912 docker.go:319] overlay module found
	I1208 01:37:07.131636 1096912 out.go:179] * Using the docker driver based on user configuration
	I1208 01:37:07.134824 1096912 start.go:309] selected driver: docker
	I1208 01:37:07.134851 1096912 start.go:927] validating driver "docker" against <nil>
	I1208 01:37:07.134866 1096912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:37:07.135612 1096912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:37:07.207356 1096912 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:37:07.19788989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:37:07.207513 1096912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 01:37:07.207741 1096912 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:37:07.210625 1096912 out.go:179] * Using Docker driver with root privileges
	I1208 01:37:07.213426 1096912 cni.go:84] Creating CNI manager for ""
	I1208 01:37:07.213499 1096912 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:37:07.213514 1096912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
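
Note: the three lines above show minikube pairing the docker driver and containerd runtime with the kindnet CNI automatically. A minimal sketch of forcing that choice explicitly at start time (flag values are standard minikube options, not taken from this run):

	# Sketch: override the auto-selected CNI; kindnet is what this run chose.
	minikube start -p no-preload-536520 --driver=docker \
	  --container-runtime=containerd \
	  --cni=kindnet   # or: auto, bridge, calico, cilium, flannel
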
	I1208 01:37:07.213600 1096912 start.go:353] cluster config:
	{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:37:07.216680 1096912 out.go:179] * Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	I1208 01:37:07.219579 1096912 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:37:07.222596 1096912 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:37:07.225570 1096912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:37:07.225594 1096912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:37:07.225731 1096912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:37:07.225767 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json: {Name:mka967444509c5ac207aea6f62a5ad5839c0e193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:07.225962 1096912 cache.go:107] acquiring lock: {Name:mk26e7e88ac6993c5141f2d02121dfa2fc547fd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.226130 1096912 cache.go:107] acquiring lock: {Name:mk597bd9b4cd05f2d1a0093859d8b23b8ea1cd1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.226328 1096912 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:37:07.226352 1096912 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 395.031µs
	I1208 01:37:07.226372 1096912 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:37:07.226390 1096912 cache.go:107] acquiring lock: {Name:mk0f1b4d6e089d68a7c2b058d311e225652853b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.226529 1096912 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:07.226927 1096912 cache.go:107] acquiring lock: {Name:mka22e7ada81429241ca2443bce21a3f31b8eb66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.227041 1096912 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:07.227275 1096912 cache.go:107] acquiring lock: {Name:mkfea4ee3c261ad6c1d7efee63fc672216a4c310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.227372 1096912 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:07.228166 1096912 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:07.228435 1096912 cache.go:107] acquiring lock: {Name:mk98329aaba04bc9ea4839996e52989df0918014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.228504 1096912 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:37:07.228518 1096912 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 91.783µs
	I1208 01:37:07.228527 1096912 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:37:07.228544 1096912 cache.go:107] acquiring lock: {Name:mk8813c8ba18f703b4246d4ffd8656e53b0f2ec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.228585 1096912 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:37:07.228596 1096912 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 53.35µs
	I1208 01:37:07.228603 1096912 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:37:07.228626 1096912 cache.go:107] acquiring lock: {Name:mk58db1a89606bc77924fd68a726167dcd840a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.228702 1096912 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:07.230717 1096912 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:07.232199 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:07.232833 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:07.233137 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:07.233395 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:07.256412 1096912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:37:07.256436 1096912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:37:07.256457 1096912 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:37:07.256487 1096912 start.go:360] acquireMachinesLock for no-preload-536520: {Name:mkcfe59c9f9ccdd77be288a5dfb4e3b57f6ad839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:37:07.256604 1096912 start.go:364] duration metric: took 98.183µs to acquireMachinesLock for "no-preload-536520"
	I1208 01:37:07.256635 1096912 start.go:93] Provisioning new machine with config: &{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:37:07.256718 1096912 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:37:07.260308 1096912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:37:07.260549 1096912 start.go:159] libmachine.API.Create for "no-preload-536520" (driver="docker")
	I1208 01:37:07.260583 1096912 client.go:173] LocalClient.Create starting
	I1208 01:37:07.260642 1096912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:37:07.260672 1096912 main.go:143] libmachine: Decoding PEM data...
	I1208 01:37:07.260693 1096912 main.go:143] libmachine: Parsing certificate...
	I1208 01:37:07.260757 1096912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:37:07.260774 1096912 main.go:143] libmachine: Decoding PEM data...
	I1208 01:37:07.260785 1096912 main.go:143] libmachine: Parsing certificate...
	I1208 01:37:07.261159 1096912 cli_runner.go:164] Run: docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:37:07.279380 1096912 cli_runner.go:211] docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:37:07.279462 1096912 network_create.go:284] running [docker network inspect no-preload-536520] to gather additional debugging logs...
	I1208 01:37:07.279482 1096912 cli_runner.go:164] Run: docker network inspect no-preload-536520
	W1208 01:37:07.296974 1096912 cli_runner.go:211] docker network inspect no-preload-536520 returned with exit code 1
	I1208 01:37:07.297009 1096912 network_create.go:287] error running [docker network inspect no-preload-536520]: docker network inspect no-preload-536520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-536520 not found
	I1208 01:37:07.297022 1096912 network_create.go:289] output of [docker network inspect no-preload-536520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-536520 not found
	
	** /stderr **
	I1208 01:37:07.297123 1096912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:37:07.315726 1096912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:37:07.316145 1096912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:37:07.316616 1096912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:37:07.316954 1096912 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a5d2ad7b018b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:2a:ce:ac:7f:15:a9} reservation:<nil>}
	I1208 01:37:07.317516 1096912 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b877c0}
	I1208 01:37:07.317547 1096912 network_create.go:124] attempt to create docker network no-preload-536520 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1208 01:37:07.317605 1096912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-536520 no-preload-536520
	I1208 01:37:07.392719 1096912 network_create.go:108] docker network no-preload-536520 192.168.85.0/24 created
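
Note: the four "skipping subnet" lines above are minikube walking the 192.168.x.0/24 ladder until it finds a free block (192.168.85.0/24 here). The occupied bridge subnets can be listed by hand with a sketch like:

	# Sketch: enumerate bridge networks and their subnets, mirroring the scan above.
	docker network ls --filter driver=bridge -q | xargs docker network inspect \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
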
	I1208 01:37:07.392750 1096912 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-536520" container
	I1208 01:37:07.392819 1096912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:37:07.409353 1096912 cli_runner.go:164] Run: docker volume create no-preload-536520 --label name.minikube.sigs.k8s.io=no-preload-536520 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:37:07.431946 1096912 oci.go:103] Successfully created a docker volume no-preload-536520
	I1208 01:37:07.432044 1096912 cli_runner.go:164] Run: docker run --rm --name no-preload-536520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-536520 --entrypoint /usr/bin/test -v no-preload-536520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:37:07.571245 1096912 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:37:07.606883 1096912 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:37:07.630806 1096912 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:37:07.699028 1096912 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:37:07.780852 1096912 cache.go:162] opening:  /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:37:08.067842 1096912 cache.go:157] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:37:08.067941 1096912 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 841.548631ms
	I1208 01:37:08.067970 1096912 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:37:08.136016 1096912 oci.go:107] Successfully prepared a docker volume no-preload-536520
	I1208 01:37:08.136061 1096912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1208 01:37:08.136193 1096912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:37:08.136312 1096912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:37:08.202832 1096912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-536520 --name no-preload-536520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-536520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-536520 --network no-preload-536520 --ip 192.168.85.2 --volume no-preload-536520:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
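
Note: the `docker run` above publishes SSH (22), the API server (8443), and the other node ports on ephemeral 127.0.0.1 ports. The resulting bindings can be recovered with:

	# Sketch: list all host-port bindings for the node container.
	docker port no-preload-536520
	# Or a single mapping, using the same template this log applies for SSH below:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-536520
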
	I1208 01:37:08.585547 1096912 cache.go:157] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:37:08.585577 1096912 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 1.35830699s
	I1208 01:37:08.585594 1096912 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:37:08.635135 1096912 cache.go:157] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:37:08.635173 1096912 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.406546204s
	I1208 01:37:08.635187 1096912 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:37:08.646608 1096912 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Running}}
	I1208 01:37:08.674639 1096912 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:37:08.741931 1096912 cli_runner.go:164] Run: docker exec no-preload-536520 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:37:08.769292 1096912 cache.go:157] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:37:08.769323 1096912 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 1.542403251s
	I1208 01:37:08.769336 1096912 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:37:08.847065 1096912 oci.go:144] the created container "no-preload-536520" has a running status.
	I1208 01:37:08.847113 1096912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa...
	I1208 01:37:08.947921 1096912 cache.go:157] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:37:08.947966 1096912 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 1.721832284s
	I1208 01:37:08.947987 1096912 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:37:08.948000 1096912 cache.go:87] Successfully saved all images to host disk.
	I1208 01:37:09.470906 1096912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:37:09.499790 1096912 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:37:09.522762 1096912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:37:09.522783 1096912 kic_runner.go:114] Args: [docker exec --privileged no-preload-536520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:37:09.579455 1096912 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:37:09.598221 1096912 machine.go:94] provisionDockerMachine start ...
	I1208 01:37:09.598326 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:09.628139 1096912 main.go:143] libmachine: Using SSH client type: native
	I1208 01:37:09.628481 1096912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1208 01:37:09.628492 1096912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:37:09.629220 1096912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:37:12.786373 1096912 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:37:12.786397 1096912 ubuntu.go:182] provisioning hostname "no-preload-536520"
	I1208 01:37:12.786502 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:12.805939 1096912 main.go:143] libmachine: Using SSH client type: native
	I1208 01:37:12.806257 1096912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1208 01:37:12.806275 1096912 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-536520 && echo "no-preload-536520" | sudo tee /etc/hostname
	I1208 01:37:12.977608 1096912 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:37:12.977724 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.002874 1096912 main.go:143] libmachine: Using SSH client type: native
	I1208 01:37:13.003208 1096912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33833 <nil> <nil>}
	I1208 01:37:13.003225 1096912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-536520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-536520/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-536520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:37:13.166980 1096912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:37:13.167076 1096912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:37:13.167117 1096912 ubuntu.go:190] setting up certificates
	I1208 01:37:13.167144 1096912 provision.go:84] configureAuth start
	I1208 01:37:13.167224 1096912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:37:13.187202 1096912 provision.go:143] copyHostCerts
	I1208 01:37:13.187275 1096912 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:37:13.187284 1096912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:37:13.187437 1096912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:37:13.187550 1096912 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:37:13.187561 1096912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:37:13.187592 1096912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:37:13.187649 1096912 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:37:13.187658 1096912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:37:13.187685 1096912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:37:13.187737 1096912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.no-preload-536520 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-536520]
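
Note: the san=[...] list above is baked into the generated server certificate; it can be double-checked with a sketch like this (cert path taken from the log line above):

	# Sketch: confirm the SANs in the freshly generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
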
	I1208 01:37:13.419914 1096912 provision.go:177] copyRemoteCerts
	I1208 01:37:13.419985 1096912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:37:13.420038 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.439340 1096912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:37:13.547173 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:37:13.567213 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:37:13.592146 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:37:13.615956 1096912 provision.go:87] duration metric: took 448.764965ms to configureAuth
	I1208 01:37:13.615986 1096912 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:37:13.616217 1096912 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:37:13.616225 1096912 machine.go:97] duration metric: took 4.017984901s to provisionDockerMachine
	I1208 01:37:13.616235 1096912 client.go:176] duration metric: took 6.355642663s to LocalClient.Create
	I1208 01:37:13.616257 1096912 start.go:167] duration metric: took 6.355706377s to libmachine.API.Create "no-preload-536520"
	I1208 01:37:13.616266 1096912 start.go:293] postStartSetup for "no-preload-536520" (driver="docker")
	I1208 01:37:13.616281 1096912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:37:13.616349 1096912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:37:13.616402 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.653024 1096912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:37:13.766262 1096912 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:37:13.770205 1096912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:37:13.770236 1096912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:37:13.770248 1096912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:37:13.770323 1096912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:37:13.770561 1096912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:37:13.770709 1096912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:37:13.779645 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:37:13.801297 1096912 start.go:296] duration metric: took 185.011733ms for postStartSetup
	I1208 01:37:13.801739 1096912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:37:13.819746 1096912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:37:13.820033 1096912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:37:13.820073 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.838027 1096912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:37:13.939590 1096912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:37:13.944551 1096912 start.go:128] duration metric: took 6.687817485s to createHost
	I1208 01:37:13.944587 1096912 start.go:83] releasing machines lock for "no-preload-536520", held for 6.687971407s
	I1208 01:37:13.944683 1096912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:37:13.962567 1096912 ssh_runner.go:195] Run: cat /version.json
	I1208 01:37:13.962619 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.962622 1096912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:37:13.962682 1096912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:37:13.985708 1096912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:37:14.003826 1096912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33833 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:37:14.090719 1096912 ssh_runner.go:195] Run: systemctl --version
	I1208 01:37:14.187580 1096912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:37:14.192527 1096912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:37:14.192609 1096912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:37:14.221440 1096912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:37:14.221525 1096912 start.go:496] detecting cgroup driver to use...
	I1208 01:37:14.221592 1096912 detect.go:187] detected "cgroupfs" cgroup driver on host os
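
Note: the "cgroupfs" detection above matches the CgroupDriver field in the docker info dumps earlier in this log; the same answer can be read straight from the daemon:

	# Sketch: query the host daemon's cgroup driver, as detected above.
	docker info --format '{{.CgroupDriver}}'   # -> cgroupfs on this host
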
	I1208 01:37:14.221692 1096912 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:37:14.237913 1096912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:37:14.251765 1096912 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:37:14.251834 1096912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:37:14.270826 1096912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:37:14.291237 1096912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:37:14.431417 1096912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:37:14.562582 1096912 docker.go:234] disabling docker service ...
	I1208 01:37:14.562668 1096912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:37:14.588162 1096912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:37:14.602020 1096912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:37:14.718852 1096912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:37:14.859871 1096912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:37:14.873292 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:37:14.888727 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:37:14.897976 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:37:14.907369 1096912 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:37:14.907442 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:37:14.916867 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:37:14.926251 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:37:14.935268 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:37:14.944579 1096912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:37:14.954017 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:37:14.963310 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:37:14.972885 1096912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:37:14.984156 1096912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:37:14.993255 1096912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:37:15.002277 1096912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:37:15.179692 1096912 ssh_runner.go:195] Run: sudo systemctl restart containerd
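
Note: the sequence of sed commands above rewrites /etc/containerd/config.toml in place before this restart; a quick check that the substitutions landed (keys taken from those commands, expected values per their replacement patterns):

	# Sketch: verify the substituted containerd settings.
	grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	# expected, per the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.10.1"
	#   SystemdCgroup = false
	#   enable_unprivileged_ports = true
	#   conf_dir = "/etc/cni/net.d"
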
	I1208 01:37:15.289543 1096912 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:37:15.289627 1096912 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:37:15.294231 1096912 start.go:564] Will wait 60s for crictl version
	I1208 01:37:15.294311 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.298429 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:37:15.346652 1096912 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:37:15.346735 1096912 ssh_runner.go:195] Run: containerd --version
	I1208 01:37:15.377503 1096912 ssh_runner.go:195] Run: containerd --version
	I1208 01:37:15.410596 1096912 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:37:15.413528 1096912 cli_runner.go:164] Run: docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:37:15.434269 1096912 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:37:15.438489 1096912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:37:15.449824 1096912 kubeadm.go:884] updating cluster {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:37:15.449953 1096912 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:37:15.450038 1096912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:37:15.480308 1096912 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1208 01:37:15.480332 1096912 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
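
Note: because no preload tarball exists for this Kubernetes version, each image in the LoadCachedImages list above is transferred into the node as a tar from the host cache and imported into containerd's k8s.io namespace. Done by hand, the import step would look roughly like this (tar path matches the /var/lib/minikube/images entries later in this log):

	# Sketch: manual equivalent of the image load minikube performs below.
	sudo ctr -n k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
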
	I1208 01:37:15.480391 1096912 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:15.480607 1096912 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:15.480701 1096912 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.480784 1096912 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:15.480885 1096912 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:15.480979 1096912 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1208 01:37:15.481062 1096912 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.481162 1096912 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.482192 1096912 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1208 01:37:15.482290 1096912 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.482327 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:15.482363 1096912 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.482581 1096912 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:15.482637 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:15.482701 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:15.482795 1096912 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.780591 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1208 01:37:15.780698 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.805240 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1208 01:37:15.805321 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.805540 1096912 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1208 01:37:15.805570 1096912 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.805602 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.813227 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1208 01:37:15.813363 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.813669 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1208 01:37:15.813747 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:15.818637 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1208 01:37:15.818706 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:15.820750 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1208 01:37:15.820824 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1208 01:37:15.831182 1096912 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1208 01:37:15.831228 1096912 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.831189 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.831273 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.887819 1096912 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1208 01:37:15.887868 1096912 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:15.887921 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.888009 1096912 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1208 01:37:15.888030 1096912 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.888053 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.894248 1096912 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1208 01:37:15.894287 1096912 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:15.894318 1096912 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1208 01:37:15.894336 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.894345 1096912 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1208 01:37:15.894381 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.894552 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.894643 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.894851 1096912 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1208 01:37:15.894893 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:15.897391 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.897420 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:15.982415 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:15.982545 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:15.982604 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:37:15.982665 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1208 01:37:15.982764 1096912 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1208 01:37:15.982795 1096912 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:15.982823 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:15.986063 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:15.986143 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:16.088933 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:16.089024 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:16.089088 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1208 01:37:16.089128 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1208 01:37:16.089092 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:37:16.089202 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:37:16.089252 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1208 01:37:16.089303 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1208 01:37:16.187613 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1208 01:37:16.187679 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:16.187780 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1208 01:37:16.187781 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:37:16.187857 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1208 01:37:16.187881 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1208 01:37:16.187933 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1208 01:37:16.187960 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1208 01:37:16.187989 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:37:16.188014 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1208 01:37:16.188078 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:37:16.283335 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1208 01:37:16.283451 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1208 01:37:16.283593 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1208 01:37:16.283674 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1208 01:37:16.283776 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:37:16.283849 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1208 01:37:16.283891 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1208 01:37:16.283964 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1208 01:37:16.284048 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1208 01:37:16.284117 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1208 01:37:16.284159 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1208 01:37:16.402886 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1208 01:37:16.402982 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1208 01:37:16.403091 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1208 01:37:16.403234 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:37:16.403917 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1208 01:37:16.404007 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1208 01:37:16.465400 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1208 01:37:16.465454 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1208 01:37:16.520786 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1208 01:37:16.521003 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1208 01:37:16.862155 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1208 01:37:16.862244 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1208 01:37:16.862324 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	W1208 01:37:16.953517 1096912 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1208 01:37:16.953766 1096912 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1208 01:37:16.953862 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:18.209754 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5: (1.255873544s)
	I1208 01:37:18.209796 1096912 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1208 01:37:18.209826 1096912 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:18.209880 1096912 ssh_runner.go:195] Run: which crictl
	I1208 01:37:18.209688 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.347318001s)
	I1208 01:37:18.209976 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1208 01:37:18.210002 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:37:18.210067 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1208 01:37:19.255164 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.045067644s)
	I1208 01:37:19.255243 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1208 01:37:19.255277 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:37:19.255190 1096912 ssh_runner.go:235] Completed: which crictl: (1.045273053s)
	I1208 01:37:19.255403 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:19.255334 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1208 01:37:20.269091 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.013548807s)
	I1208 01:37:20.269121 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1208 01:37:20.269138 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:37:20.269188 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1208 01:37:20.269253 1096912 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.013793559s)
	I1208 01:37:20.269310 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:21.770594 1096912 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.501257777s)
	I1208 01:37:21.770674 1096912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:37:21.770736 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.501532347s)
	I1208 01:37:21.770747 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1208 01:37:21.770762 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:37:21.770783 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1208 01:37:23.383526 1096912 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.612829216s)
	I1208 01:37:23.383584 1096912 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1208 01:37:23.383675 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:37:23.383800 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.61300484s)
	I1208 01:37:23.383813 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1208 01:37:23.383828 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:37:23.383867 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1208 01:37:23.391394 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1208 01:37:23.391452 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1208 01:37:25.512732 1096912 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.128841597s)
	I1208 01:37:25.512756 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1208 01:37:25.512773 1096912 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:37:25.512825 1096912 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1208 01:37:25.877277 1096912 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1208 01:37:25.877310 1096912 cache_images.go:125] Successfully loaded all cached images
	I1208 01:37:25.877316 1096912 cache_images.go:94] duration metric: took 10.39696194s to LoadCachedImages
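The image-loading loop above follows a fixed per-image cycle: list the image in containerd's k8s.io namespace, compare the digest against the cache, remove any mismatched copy via crictl, scp the cached tarball to /var/lib/minikube/images, and import it with ctr. A minimal shell sketch of one iteration, using the pause image from this log (these are the same commands ssh_runner executes above):

    # check whether containerd already holds the expected image
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
    # on digest mismatch, remove the stale copy through the CRI tool
    sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
    # import the cached tarball that was scp'd onto the node
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1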
	I1208 01:37:25.877328 1096912 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:37:25.877420 1096912 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-536520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:37:25.877480 1096912 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:37:25.901981 1096912 cni.go:84] Creating CNI manager for ""
	I1208 01:37:25.902059 1096912 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:37:25.902094 1096912 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:37:25.902150 1096912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-536520 NodeName:no-preload-536520 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:37:25.902299 1096912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-536520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
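The kubeadm config rendered above is first written to /var/tmp/minikube/kubeadm.yaml.new and only later copied over kubeadm.yaml. As a sanity check outside the test (an assumption, not a step this harness runs), recent kubeadm releases can validate such a file offline:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml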
	I1208 01:37:25.902389 1096912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:37:25.910170 1096912 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1208 01:37:25.910236 1096912 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:37:25.917966 1096912 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1208 01:37:25.918064 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1208 01:37:25.918226 1096912 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet
	I1208 01:37:25.918729 1096912 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm
	I1208 01:37:25.922658 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1208 01:37:25.922696 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1208 01:37:26.961136 1096912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:37:26.975981 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1208 01:37:26.980131 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1208 01:37:26.980216 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1208 01:37:27.133499 1096912 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1208 01:37:27.148797 1096912 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1208 01:37:27.148877 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
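Because this is a no-preload profile, kubectl, kubelet, and kubeadm are fetched from dl.k8s.io with a checksum pinned to the matching .sha256 file, cached under .minikube/cache, then scp'd into /var/lib/minikube/binaries. A hand-rolled sketch of the same download-and-verify step (the standard upstream pattern, not the harness's Go code):

    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check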
	I1208 01:37:27.573053 1096912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:37:27.581628 1096912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:37:27.594842 1096912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:37:27.608897 1096912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 01:37:27.622470 1096912 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:37:27.626138 1096912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:37:27.635938 1096912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:37:27.753553 1096912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:37:27.771003 1096912 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520 for IP: 192.168.85.2
	I1208 01:37:27.771079 1096912 certs.go:195] generating shared ca certs ...
	I1208 01:37:27.771111 1096912 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:27.771299 1096912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:37:27.771379 1096912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:37:27.771403 1096912 certs.go:257] generating profile certs ...
	I1208 01:37:27.771478 1096912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key
	I1208 01:37:27.771514 1096912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.crt with IP's: []
	I1208 01:37:28.533601 1096912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.crt ...
	I1208 01:37:28.533634 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.crt: {Name:mk44b293d2fb4c2bca549c8e257c179fdbde37a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:28.533833 1096912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key ...
	I1208 01:37:28.533846 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key: {Name:mkb0e2ab50c15a2a381e74bb3f4cd6347d8500ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:28.533940 1096912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035
	I1208 01:37:28.533956 1096912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt.759f0035 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1208 01:37:28.649079 1096912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt.759f0035 ...
	I1208 01:37:28.649112 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt.759f0035: {Name:mkd712afdc27bee6347d0592e95aa9213bcfb9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:28.649315 1096912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035 ...
	I1208 01:37:28.649332 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035: {Name:mk78f684fac48940b784042c46fa8b682bc50d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:28.649433 1096912 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt.759f0035 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt
	I1208 01:37:28.649522 1096912 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key
	I1208 01:37:28.649595 1096912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key
	I1208 01:37:28.649613 1096912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt with IP's: []
	I1208 01:37:28.938028 1096912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt ...
	I1208 01:37:28.938061 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt: {Name:mkd3040073d7837c734ef03d694411f5bf594007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:37:28.938247 1096912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key ...
	I1208 01:37:28.938262 1096912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key: {Name:mkb3f6317e1a4982c2c5c27a160a07b509b5bf5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
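The apiserver profile cert above is signed for the SAN set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] (service VIP, loopback, and node IP). minikube generates it in Go against minikubeCA; a self-signed openssl stand-in with the same SANs, purely illustrative, would be:

    openssl genrsa -out apiserver.key 2048
    openssl req -new -x509 -key apiserver.key -days 365 -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2" \
      -out apiserver.crt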
	I1208 01:37:28.938495 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:37:28.938543 1096912 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:37:28.938557 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:37:28.938586 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:37:28.938614 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:37:28.938643 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:37:28.938691 1096912 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:37:28.939318 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:37:28.957859 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:37:28.976518 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:37:28.994439 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:37:29.019077 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:37:29.036879 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:37:29.054091 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:37:29.071433 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:37:29.089142 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:37:29.106954 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:37:29.124909 1096912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:37:29.142343 1096912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:37:29.156777 1096912 ssh_runner.go:195] Run: openssl version
	I1208 01:37:29.163394 1096912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:37:29.170742 1096912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:37:29.178116 1096912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:37:29.182126 1096912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:37:29.182219 1096912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:37:29.223367 1096912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:37:29.230961 1096912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:37:29.238346 1096912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:37:29.245685 1096912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:37:29.253437 1096912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:37:29.257046 1096912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:37:29.257135 1096912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:37:29.298588 1096912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:37:29.306168 1096912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
	I1208 01:37:29.313528 1096912 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:37:29.320863 1096912 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:37:29.328899 1096912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:37:29.332655 1096912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:37:29.332771 1096912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:37:29.381804 1096912 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:37:29.389454 1096912 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
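The hash values in the symlink names (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the trust store under /etc/ssl/certs is indexed. Reproducing one link by hand from the commands in the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"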
	I1208 01:37:29.397077 1096912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:37:29.402434 1096912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:37:29.402517 1096912 kubeadm.go:401] StartCluster: {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:37:29.402591 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:37:29.402652 1096912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:37:29.430505 1096912 cri.go:89] found id: ""
	I1208 01:37:29.430621 1096912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:37:29.439503 1096912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:37:29.447866 1096912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:37:29.447972 1096912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:37:29.456567 1096912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:37:29.456591 1096912 kubeadm.go:158] found existing configuration files:
	
	I1208 01:37:29.456664 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:37:29.464462 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:37:29.464532 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:37:29.472286 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:37:29.480274 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:37:29.480366 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:37:29.487900 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:37:29.497200 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:37:29.497289 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:37:29.504758 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:37:29.513213 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:37:29.513301 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
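The four checks above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm regenerates it. Condensed into one loop with equivalent behavior (grep also fails on a missing file, so rm still runs):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done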
	I1208 01:37:29.521588 1096912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:37:29.641183 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:37:29.641717 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:37:29.714280 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:41:34.109607 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:41:34.109645 1096912 kubeadm.go:319] 
	I1208 01:41:34.109713 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:41:34.113996 1096912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:41:34.114119 1096912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:41:34.114248 1096912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:41:34.114333 1096912 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:41:34.114394 1096912 kubeadm.go:319] OS: Linux
	I1208 01:41:34.114501 1096912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:41:34.114590 1096912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:41:34.114660 1096912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:41:34.114744 1096912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:41:34.114814 1096912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:41:34.114898 1096912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:41:34.114969 1096912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:41:34.115038 1096912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:41:34.115120 1096912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:41:34.115216 1096912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:41:34.115356 1096912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:41:34.115492 1096912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:41:34.115594 1096912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:41:34.119636 1096912 out.go:252]   - Generating certificates and keys ...
	I1208 01:41:34.119815 1096912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:41:34.119949 1096912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:41:34.120068 1096912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:41:34.120131 1096912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:41:34.120193 1096912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:41:34.120250 1096912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:41:34.120305 1096912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:41:34.120427 1096912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:41:34.120480 1096912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:41:34.120609 1096912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1208 01:41:34.120675 1096912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:41:34.120739 1096912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:41:34.120783 1096912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:41:34.120839 1096912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:41:34.120889 1096912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:41:34.120954 1096912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:41:34.121009 1096912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:41:34.121077 1096912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:41:34.121132 1096912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:41:34.121219 1096912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:41:34.121286 1096912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:41:34.124345 1096912 out.go:252]   - Booting up control plane ...
	I1208 01:41:34.124458 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:41:34.124540 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:41:34.124608 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:41:34.124714 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:41:34.124808 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:41:34.124913 1096912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:41:34.125008 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:41:34.125049 1096912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:41:34.125181 1096912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:41:34.125286 1096912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:41:34.125351 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000798763s
	I1208 01:41:34.125355 1096912 kubeadm.go:319] 
	I1208 01:41:34.125412 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:41:34.125445 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:41:34.125556 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:41:34.125560 1096912 kubeadm.go:319] 
	I1208 01:41:34.125665 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:41:34.125698 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:41:34.125728 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1208 01:41:34.125842 1096912 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000798763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
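The failed health check above is kubeadm polling http://127.0.0.1:10248/healthz for up to 4m0s. The diagnostics it suggests can be run inside the node container via the profile named in this log; a minimal sketch (standard systemctl/journalctl/curl usage, not part of the captured output):

	# diagnostics suggested by kubeadm, executed on the no-preload-536520 node
	minikube ssh -p no-preload-536520 "sudo systemctl status kubelet"
	minikube ssh -p no-preload-536520 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	minikube ssh -p no-preload-536520 "curl -sSL http://127.0.0.1:10248/healthz"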
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-536520] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000798763s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
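The recurring SystemVerification warning concerns the cgroups v1 deprecation: per the message, kubelet v1.35+ only tolerates cgroups v1 when the KubeletConfiguration option 'FailCgroupV1' is explicitly set to 'false' and the validation is skipped. Whether a host is on v1 or v2 can be checked with a standard one-liner (not taken from this log):

	# cgroup2fs => cgroups v2; tmpfs => cgroups v1, the deprecated mode warned about above
	stat -fc %T /sys/fs/cgroup/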
	
	I1208 01:41:34.125930 1096912 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:41:34.126289 1096912 kubeadm.go:319] 
	I1208 01:41:34.633627 1096912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:41:34.653072 1096912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:41:34.653135 1096912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:41:34.671977 1096912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:41:34.671996 1096912 kubeadm.go:158] found existing configuration files:
	
	I1208 01:41:34.672049 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:41:34.682088 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:41:34.682154 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:41:34.691971 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:41:34.704874 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:41:34.704947 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:41:34.716733 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:41:34.728822 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:41:34.728947 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:41:34.740247 1096912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:41:34.749993 1096912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:41:34.750110 1096912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
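The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it references https://control-plane.minikube.internal:8443, and here every grep exits with status 2 because the files are absent, so the rm -f calls are no-ops. The same check condensed into a loop (paths and URL taken from this log):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs that are missing or point elsewhere
	done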
	I1208 01:41:34.759733 1096912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:41:34.830377 1096912 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:41:34.830565 1096912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:41:34.929677 1096912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:41:34.929822 1096912 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:41:34.929885 1096912 kubeadm.go:319] OS: Linux
	I1208 01:41:34.929977 1096912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:41:34.930068 1096912 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:41:34.930149 1096912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:41:34.930229 1096912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:41:34.930310 1096912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:41:34.930389 1096912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:41:34.930485 1096912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:41:34.930567 1096912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:41:34.930646 1096912 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:41:35.025248 1096912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:41:35.025428 1096912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:41:35.025581 1096912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:41:35.033241 1096912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:41:35.038745 1096912 out.go:252]   - Generating certificates and keys ...
	I1208 01:41:35.038843 1096912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:41:35.038947 1096912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:41:35.039051 1096912 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:41:35.039127 1096912 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:41:35.039203 1096912 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:41:35.039276 1096912 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:41:35.039366 1096912 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:41:35.039431 1096912 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:41:35.039531 1096912 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:41:35.039874 1096912 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:41:35.040303 1096912 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:41:35.040566 1096912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:41:35.149795 1096912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:41:35.377243 1096912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:41:35.573257 1096912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:41:35.819657 1096912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:41:35.945400 1096912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:41:35.946683 1096912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:41:35.949818 1096912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:41:35.953218 1096912 out.go:252]   - Booting up control plane ...
	I1208 01:41:35.953396 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:41:35.953526 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:41:35.954510 1096912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:41:35.978435 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:41:35.978612 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:41:35.987719 1096912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:41:35.987820 1096912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:41:35.987860 1096912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:41:36.227059 1096912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:41:36.227181 1096912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:36.224252 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202843s
	I1208 01:45:36.224297 1096912 kubeadm.go:319] 
	I1208 01:45:36.224376 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:36.224412 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:36.224526 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:36.224533 1096912 kubeadm.go:319] 
	I1208 01:45:36.224650 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:36.224695 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:36.224737 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:36.224744 1096912 kubeadm.go:319] 
	I1208 01:45:36.229514 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:36.229948 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:36.230084 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:36.230325 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:36.230339 1096912 kubeadm.go:319] 
	I1208 01:45:36.230417 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:45:36.230499 1096912 kubeadm.go:403] duration metric: took 8m6.827986586s to StartCluster
	I1208 01:45:36.230540 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:45:36.230607 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:45:36.261525 1096912 cri.go:89] found id: ""
	I1208 01:45:36.261550 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.261560 1096912 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:45:36.261567 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:45:36.261627 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:45:36.290275 1096912 cri.go:89] found id: ""
	I1208 01:45:36.290298 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.290307 1096912 logs.go:284] No container was found matching "etcd"
	I1208 01:45:36.290313 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:45:36.290373 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:45:36.317513 1096912 cri.go:89] found id: ""
	I1208 01:45:36.317543 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.317552 1096912 logs.go:284] No container was found matching "coredns"
	I1208 01:45:36.317559 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:45:36.317626 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:45:36.344794 1096912 cri.go:89] found id: ""
	I1208 01:45:36.344818 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.344827 1096912 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:45:36.344834 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:45:36.344896 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:45:36.374203 1096912 cri.go:89] found id: ""
	I1208 01:45:36.374231 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.374239 1096912 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:45:36.374246 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:45:36.374305 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:45:36.400248 1096912 cri.go:89] found id: ""
	I1208 01:45:36.400280 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.400291 1096912 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:45:36.400299 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:45:36.400360 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:45:36.425166 1096912 cri.go:89] found id: ""
	I1208 01:45:36.425190 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.425203 1096912 logs.go:284] No container was found matching "kindnet"
	I1208 01:45:36.425213 1096912 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:45:36.425226 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:45:36.489026 1096912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:45:36.489050 1096912 logs.go:123] Gathering logs for containerd ...
	I1208 01:45:36.489063 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:45:36.530813 1096912 logs.go:123] Gathering logs for container status ...
	I1208 01:45:36.530852 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:45:36.565171 1096912 logs.go:123] Gathering logs for kubelet ...
	I1208 01:45:36.565198 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:45:36.637319 1096912 logs.go:123] Gathering logs for dmesg ...
	I1208 01:45:36.637363 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
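The piecemeal gathering above (describe nodes, containerd and kubelet journals, crictl container status, dmesg) is what `minikube logs` bundles into one file, as the advice box below also recommends; a sketch using this run's binary and profile:

	out/minikube-linux-arm64 logs -p no-preload-536520 --file=logs.txt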
	W1208 01:45:36.667302 1096912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:45:36.667423 1096912 out.go:285] * 
	W1208 01:45:36.667679 1096912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.667856 1096912 out.go:285] * 
	W1208 01:45:36.670553 1096912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:45:36.675751 1096912 out.go:203] 
	W1208 01:45:36.678675 1096912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.678951 1096912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:45:36.679016 1096912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:45:36.683776 1096912 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
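The hint in the captured output is to retry with an explicit kubelet cgroup driver. A hedged sketch that reuses the exact failing invocation from the line above and appends the suggested flag (untested here; the value comes from minikube's own suggestion):

	out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr \
	  --wait=true --preload=false --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd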
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1097222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:37:08.305644912Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d978b7ec933dfaa3a40373d30ab4c31d838283a17009d633c2f2575fe0d2fa01",
	            "SandboxKey": "/var/run/docker/netns/d978b7ec933d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:a5:95:c9:47:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "169f4c96797fb11af0a5eb9b81855033f528e1a5dc4666f7c0ac0ae34794695b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
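The inspect output above shows all five container ports (22, 2376, 5000, 8443, 32443) published to ephemeral host ports on 127.0.0.1. A minimal way to pull one mapping out of that JSON, reusing the Go template the harness itself runs later in this log (a sketch, assuming the no-preload-536520 container still exists):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-536520
	# prints 33833 for the state captured above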
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 6 (338.63911ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:37.118358 1125406 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
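The exit status 6 is the kubeconfig drift the stdout warning describes: the profile's endpoint is missing from /home/jenkins/minikube-integration/22054-843440/kubeconfig, so `status` reports the host as Running but the kubeconfig check fails. The repair the warning itself suggests (a sketch, using this run's binary and profile):

	out/minikube-linux-arm64 update-context -p no-preload-536520
	kubectl config current-context   # verify the context resolves again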
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-895688 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:43:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:43:34.815729 1121810 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:34.815855 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.815867 1121810 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:34.815872 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.816138 1121810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:43:34.816580 1121810 out.go:368] Setting JSON to false
	I1208 01:43:34.817458 1121810 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23168,"bootTime":1765135047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:43:34.817532 1121810 start.go:143] virtualization:  
	I1208 01:43:34.821576 1121810 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:34.825841 1121810 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:34.825977 1121810 notify.go:221] Checking for updates...
	I1208 01:43:34.832259 1121810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:34.835279 1121810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:43:34.841305 1121810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:43:34.844728 1121810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:34.847821 1121810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:34.851422 1121810 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:34.851605 1121810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:34.879485 1121810 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:34.879731 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:34.965413 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:34.956183464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:34.965520 1121810 docker.go:319] overlay module found
	I1208 01:43:34.968791 1121810 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:34.971755 1121810 start.go:309] selected driver: docker
	I1208 01:43:34.971774 1121810 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:34.971788 1121810 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:34.972547 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:35.028561 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:35.017550524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:35.028722 1121810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:43:35.028754 1121810 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:43:35.029019 1121810 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:43:35.032208 1121810 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:35.035049 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:35.035123 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:35.035136 1121810 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:35.035233 1121810 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:35.038520 1121810 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:43:35.041429 1121810 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:43:35.044577 1121810 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:35.047363 1121810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:35.047458 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.047540 1121810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:43:35.047549 1121810 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:35.047628 1121810 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:35.047639 1121810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
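	# For reference, the preload resolved above is a plain lz4 tarball on disk (a sketch):
	ls -lh /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4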
	I1208 01:43:35.047753 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:35.047771 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json: {Name:mk01a58f99ac25ab3f8420cd37e5943e99ab0d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:35.067817 1121810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:35.067841 1121810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:35.067860 1121810 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:35.067891 1121810 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:35.068006 1121810 start.go:364] duration metric: took 93.999µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:43:35.068037 1121810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:43:35.068112 1121810 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:35.071522 1121810 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:35.071830 1121810 start.go:159] libmachine.API.Create for "newest-cni-457779" (driver="docker")
	I1208 01:43:35.071875 1121810 client.go:173] LocalClient.Create starting
	I1208 01:43:35.072009 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:43:35.072049 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072066 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072149 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:43:35.072168 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072180 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072559 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:35.089783 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:35.089866 1121810 network_create.go:284] running [docker network inspect newest-cni-457779] to gather additional debugging logs...
	I1208 01:43:35.089892 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779
	W1208 01:43:35.107018 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 returned with exit code 1
	I1208 01:43:35.107053 1121810 network_create.go:287] error running [docker network inspect newest-cni-457779]: docker network inspect newest-cni-457779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-457779 not found
	I1208 01:43:35.107080 1121810 network_create.go:289] output of [docker network inspect newest-cni-457779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-457779 not found
	
	** /stderr **
	I1208 01:43:35.107206 1121810 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:35.124993 1121810 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:43:35.125469 1121810 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:43:35.125932 1121810 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:43:35.126507 1121810 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05be0}
	I1208 01:43:35.126532 1121810 network_create.go:124] attempt to create docker network newest-cni-457779 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:43:35.126598 1121810 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-457779 newest-cni-457779
	I1208 01:43:35.185488 1121810 network_create.go:108] docker network newest-cni-457779 192.168.76.0/24 created
	I1208 01:43:35.185521 1121810 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-457779" container
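	# The probe at 01:43:35.089 exited 1 because the network did not exist yet; minikube
	# then skipped the three taken /24s and created 192.168.76.0/24. A quick check that
	# the new network landed on that subnet (a sketch, reusing the inspect template above):
	docker network inspect newest-cni-457779 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# expected: 192.168.76.0/24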
	I1208 01:43:35.185613 1121810 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:35.202705 1121810 cli_runner.go:164] Run: docker volume create newest-cni-457779 --label name.minikube.sigs.k8s.io=newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:35.222719 1121810 oci.go:103] Successfully created a docker volume newest-cni-457779
	I1208 01:43:35.222808 1121810 cli_runner.go:164] Run: docker run --rm --name newest-cni-457779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --entrypoint /usr/bin/test -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:35.763230 1121810 oci.go:107] Successfully prepared a docker volume newest-cni-457779
	I1208 01:43:35.763299 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.763314 1121810 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:35.763383 1121810 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:39.703637 1121810 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.940199388s)
	I1208 01:43:39.703674 1121810 kic.go:203] duration metric: took 3.940356568s to extract preloaded images to volume ...
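	# The extracted images now live in the named volume that the node container mounts at
	# /var. A sketch of a spot check (assumes /usr/bin/ls exists in the kicbase image):
	docker run --rm --entrypoint /usr/bin/ls -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 /var/lib/containerd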
	W1208 01:43:39.703822 1121810 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:39.703938 1121810 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:39.755193 1121810 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-457779 --name newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-457779 --network newest-cni-457779 --ip 192.168.76.2 --volume newest-cni-457779:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:40.098178 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Running}}
	I1208 01:43:40.139995 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.170386 1121810 cli_runner.go:164] Run: docker exec newest-cni-457779 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:40.229771 1121810 oci.go:144] the created container "newest-cni-457779" has a running status.
	I1208 01:43:40.229802 1121810 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa...
	I1208 01:43:40.697110 1121810 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:40.726929 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.749136 1121810 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:40.749165 1121810 kic_runner.go:114] Args: [docker exec --privileged newest-cni-457779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:43:40.813461 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.840725 1121810 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:40.840842 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:40.867802 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:40.868157 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:40.868172 1121810 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:41.095667 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.095764 1121810 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:43:41.095876 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.120122 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.120469 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.120480 1121810 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:43:41.290623 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.290789 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.311253 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.311570 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.311587 1121810 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
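	# The script above is idempotent: it touches /etc/hosts only when no line already maps
	# the new hostname. A sketch of the post-condition it establishes:
	grep 'newest-cni-457779' /etc/hosts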
	I1208 01:43:41.483218 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:41.483251 1121810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:43:41.483283 1121810 ubuntu.go:190] setting up certificates
	I1208 01:43:41.483304 1121810 provision.go:84] configureAuth start
	I1208 01:43:41.483379 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:41.501594 1121810 provision.go:143] copyHostCerts
	I1208 01:43:41.501670 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:43:41.501684 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:43:41.501765 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:43:41.501870 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:43:41.501882 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:43:41.501911 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:43:41.501965 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:43:41.501974 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:43:41.501997 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:43:41.502054 1121810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:43:41.701737 1121810 provision.go:177] copyRemoteCerts
	I1208 01:43:41.701810 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:41.701853 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.719228 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:41.826667 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:41.845605 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:43:41.864446 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:43:41.883477 1121810 provision.go:87] duration metric: took 400.143683ms to configureAuth
	I1208 01:43:41.883508 1121810 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:41.883715 1121810 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:41.883727 1121810 machine.go:97] duration metric: took 1.042983827s to provisionDockerMachine
	I1208 01:43:41.883734 1121810 client.go:176] duration metric: took 6.811847736s to LocalClient.Create
	I1208 01:43:41.883755 1121810 start.go:167] duration metric: took 6.811927679s to libmachine.API.Create "newest-cni-457779"
	I1208 01:43:41.883766 1121810 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:43:41.883777 1121810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:41.883842 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:41.883884 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.901984 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.009332 1121810 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:42.014632 1121810 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:42.014671 1121810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:42.014684 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:43:42.014745 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:43:42.014838 1121810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:43:42.014945 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:42.027153 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:42.048521 1121810 start.go:296] duration metric: took 164.740218ms for postStartSetup
	I1208 01:43:42.048996 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.068188 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:42.068517 1121810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:42.068578 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.089135 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.196671 1121810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:43:42.202509 1121810 start.go:128] duration metric: took 7.134380905s to createHost
	I1208 01:43:42.202544 1121810 start.go:83] releasing machines lock for "newest-cni-457779", held for 7.134523987s
	I1208 01:43:42.202651 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.225469 1121810 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:42.225530 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.225555 1121810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:42.225620 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.248192 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.252198 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.354818 1121810 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:42.448479 1121810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:42.453374 1121810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:42.453474 1121810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:42.481420 1121810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
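The find/mv pair above parks any conflicting bridge or podman CNI configs by renaming them to *.mk_disabled, so the CNI chosen later (kindnet, per the cni.go lines further down) owns the pod network. The same command, written out in portable find syntax (a sketch equivalent to what the log runs):

    # Park conflicting CNI configs by renaming them, as the find above does.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;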
	I1208 01:43:42.481454 1121810 start.go:496] detecting cgroup driver to use...
	I1208 01:43:42.481487 1121810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:42.481545 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:43:42.497315 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:43:42.510801 1121810 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:42.510908 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:42.528913 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:42.549245 1121810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:42.677688 1121810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:42.808025 1121810 docker.go:234] disabling docker service ...
	I1208 01:43:42.808134 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:42.829668 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:42.844784 1121810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:42.967423 1121810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:43.080509 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:43.099271 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:43.116361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:43:43.125920 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:43:43.135415 1121810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:43:43.135546 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:43:43.145049 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.154361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:43:43.163282 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.172992 1121810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:43.183456 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:43:43.192700 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:43:43.201918 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:43:43.211033 1121810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:43.218575 1121810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:43.226217 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.336148 1121810 ssh_runner.go:195] Run: sudo systemctl restart containerd
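The block between the crictl.yaml write and this restart is the "cgroupfs" switch announced at 01:43:43.135415: crictl is pinned to containerd's socket, SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux / runc.v1 names are rewritten to io.containerd.runc.v2, and unprivileged ports are re-enabled. A minimal sketch of the key edit and a post-restart check, assuming the stock /etc/containerd/config.toml layout used by the kicbase image:

    # Force the runc runtime onto the cgroupfs driver, then verify it stuck.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
    cat /etc/crictl.yaml                                  # expect: runtime-endpoint: unix:///run/containerd/containerd.sock
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false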
	I1208 01:43:43.485946 1121810 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:43:43.486057 1121810 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:43:43.490114 1121810 start.go:564] Will wait 60s for crictl version
	I1208 01:43:43.490229 1121810 ssh_runner.go:195] Run: which crictl
	I1208 01:43:43.494026 1121810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:43.518236 1121810 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:43:43.518359 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.546503 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.572460 1121810 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:43:43.575475 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:43.591594 1121810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:43.595521 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
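The bash one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current mapping, and sudo-copy the temp file back in one step (a plain > redirection cannot cross the sudo boundary). The same pattern with hypothetical placeholders:

    # Replace-or-add a hosts entry; NAME and IP are illustrative placeholders.
    NAME=host.minikube.internal
    IP=192.168.76.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts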
	I1208 01:43:43.608351 1121810 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:43:43.611336 1121810 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:43.611494 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:43.611589 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.637041 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.637067 1121810 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:43:43.637131 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.663968 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.663994 1121810 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:43.664003 1121810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:43:43.664106 1121810 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
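The rendered unit above overrides ExecStart with the version-pinned kubelet binary and its node flags; it lands on disk as the 10-kubeadm.conf drop-in scp'd a few lines below. Once installed, the effective unit can be inspected (a sketch, assuming systemd on the node):

    # Show the kubelet unit plus drop-ins, and the ExecStart that won.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager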
	I1208 01:43:43.664184 1121810 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:43:43.690508 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:43.690535 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:43.690554 1121810 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:43:43.690578 1121810 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:43.690708 1121810 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:43:43.690785 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:43:43.698933 1121810 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:43.699056 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:43.707017 1121810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:43:43.720830 1121810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:43:43.734819 1121810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
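With the rendered config scp'd to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before init runs. A sketch using kubeadm's own validator ('kubeadm config validate' exists in recent kubeadm releases; the binary path is taken from the log, and whether this beta build accepts the subcommand is an assumption):

    # Validate the generated kubeadm config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new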
	I1208 01:43:43.748443 1121810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:43.752534 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:43.763093 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.892382 1121810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:43:43.909690 1121810 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:43:43.909718 1121810 certs.go:195] generating shared ca certs ...
	I1208 01:43:43.909736 1121810 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:43.909947 1121810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:43:43.910028 1121810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:43:43.910042 1121810 certs.go:257] generating profile certs ...
	I1208 01:43:43.910113 1121810 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:43:43.910132 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt with IP's: []
	I1208 01:43:44.271233 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt ...
	I1208 01:43:44.271267 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt: {Name:mka7ec1a9b348db295896c4fbe93c78f0eac2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271468 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key ...
	I1208 01:43:44.271482 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key: {Name:mkc310f4a570315e10c49516c56b2513b55aa651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271582 1121810 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:43:44.271600 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:43:44.830639 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 ...
	I1208 01:43:44.830674 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399: {Name:mk4dcde78303e922dc6fd9b0f86bb4a694f9ca60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830866 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 ...
	I1208 01:43:44.830881 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399: {Name:mk518276ee5546392f5eb2700a48869cb6431589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830967 1121810 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt
	I1208 01:43:44.831050 1121810 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key
	I1208 01:43:44.831119 1121810 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:43:44.831137 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt with IP's: []
	I1208 01:43:44.882804 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt ...
	I1208 01:43:44.882832 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt: {Name:mk7d1a29564431efb40b45d0c303e991b7f53000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883011 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key ...
	I1208 01:43:44.883025 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key: {Name:mk7882d520d12c1dd539975ac85c206b173a5dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883213 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:43:44.883262 1121810 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:44.883275 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:43:44.883314 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:44.883347 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:44.883379 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:43:44.883428 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:44.884017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:44.902791 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:44.921385 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:44.941017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:44.959449 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:43:44.976974 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:44.995099 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:45.050850 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:45.098831 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:43:45.141186 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:45.192133 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:43:45.239267 1121810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
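All profile certs are now under /var/lib/minikube/certs, so the SANs generated at 01:43:44 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) can be double-checked on the node. A minimal openssl sketch:

    # Confirm the apiserver cert carries the expected IP SANs.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'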
	I1208 01:43:45.262695 1121810 ssh_runner.go:195] Run: openssl version
	I1208 01:43:45.278120 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.292034 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:43:45.303687 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308166 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308244 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.359063 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.376429 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.385683 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.402964 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:45.419573 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426601 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426673 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.470891 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:45.478953 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:43:45.487067 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.495033 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:43:45.505252 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509187 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509254 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.550865 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:45.558694 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
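The test/ln pairs above implement OpenSSL's hashed-directory lookup: each trusted PEM gets a symlink named <subject-hash>.0 so openssl can resolve it by hash (b5213941.0 is minikubeCA's hash in this run). Done by hand, the same step is:

    # Create the <hash>.0 symlink OpenSSL expects for a trusted CA.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"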
	I1208 01:43:45.566191 1121810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:45.569801 1121810 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:45.569857 1121810 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:45.569950 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:43:45.570007 1121810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:45.598900 1121810 cri.go:89] found id: ""
	I1208 01:43:45.598976 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:45.606734 1121810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:45.614639 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:45.614740 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:45.622525 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:45.622586 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:45.622667 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:43:45.630915 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:45.631002 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:45.638607 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:43:45.646509 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:45.646578 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:45.654866 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.663136 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:45.663229 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.670925 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:43:45.679164 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:45.679233 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:43:45.686793 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
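This is the init attempt whose output follows; the long --ignore-preflight-errors list exists because the docker driver deliberately trips SystemVerification and related checks. When the run fails below, kubeadm's own error text recommends re-running with more verbosity; a sketch of that re-run (same config, with --ignore-preflight-errors=all substituted for brevity):

    # Verbose re-run of the failing init, same config path as the log.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=all --v=5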
	I1208 01:43:45.726191 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:43:45.726671 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:45.801402 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:45.801539 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:45.801611 1121810 kubeadm.go:319] OS: Linux
	I1208 01:43:45.801690 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:45.801780 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:45.801861 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:45.801944 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:45.802035 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:45.802107 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:45.802179 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:45.802261 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:45.802332 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:45.874152 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:45.874329 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:45.874481 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:45.879465 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:45.885605 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:45.885775 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:45.885880 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:46.228184 1121810 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:46.789407 1121810 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:46.965778 1121810 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:47.194652 1121810 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:47.706685 1121810 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:47.707058 1121810 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:47.801474 1121810 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:47.801936 1121810 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:48.142552 1121810 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:48.263003 1121810 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:48.445660 1121810 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:48.445984 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:48.591329 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:49.028618 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:49.379863 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:49.569393 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:50.065560 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:50.066253 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:50.069265 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:50.072991 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:43:50.073097 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:50.073174 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:50.073707 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:50.092183 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:50.092359 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:50.100854 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:50.101429 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:50.101728 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:50.234928 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:50.235054 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:36.224252 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202843s
	I1208 01:45:36.224297 1096912 kubeadm.go:319] 
	I1208 01:45:36.224376 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:36.224412 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:36.224526 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:36.224533 1096912 kubeadm.go:319] 
	I1208 01:45:36.224650 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:36.224695 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:36.224737 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:36.224744 1096912 kubeadm.go:319] 
	I1208 01:45:36.229514 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:36.229948 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:36.230084 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:36.230325 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:36.230339 1096912 kubeadm.go:319] 
	I1208 01:45:36.230417 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
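The kubelet never answered its healthz probe, so kubeadm gave up at the 4m0s deadline. The triage commands kubeadm itself suggests, plus the probe it was polling, collected into one sketch to run on the node:

    # Triage a kubelet that never becomes healthy.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet -n 200 --no-pager
    curl -sSL http://127.0.0.1:10248/healthz    # the endpoint kubeadm polls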
	I1208 01:45:36.230499 1096912 kubeadm.go:403] duration metric: took 8m6.827986586s to StartCluster
	I1208 01:45:36.230540 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:45:36.230607 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:45:36.261525 1096912 cri.go:89] found id: ""
	I1208 01:45:36.261550 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.261560 1096912 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:45:36.261567 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:45:36.261627 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:45:36.290275 1096912 cri.go:89] found id: ""
	I1208 01:45:36.290298 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.290307 1096912 logs.go:284] No container was found matching "etcd"
	I1208 01:45:36.290313 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:45:36.290373 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:45:36.317513 1096912 cri.go:89] found id: ""
	I1208 01:45:36.317543 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.317552 1096912 logs.go:284] No container was found matching "coredns"
	I1208 01:45:36.317559 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:45:36.317626 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:45:36.344794 1096912 cri.go:89] found id: ""
	I1208 01:45:36.344818 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.344827 1096912 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:45:36.344834 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:45:36.344896 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:45:36.374203 1096912 cri.go:89] found id: ""
	I1208 01:45:36.374231 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.374239 1096912 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:45:36.374246 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:45:36.374305 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:45:36.400248 1096912 cri.go:89] found id: ""
	I1208 01:45:36.400280 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.400291 1096912 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:45:36.400299 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:45:36.400360 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:45:36.425166 1096912 cri.go:89] found id: ""
	I1208 01:45:36.425190 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.425203 1096912 logs.go:284] No container was found matching "kindnet"
	I1208 01:45:36.425213 1096912 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:45:36.425226 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:45:36.489026 1096912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:45:36.489050 1096912 logs.go:123] Gathering logs for containerd ...
	I1208 01:45:36.489063 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:45:36.530813 1096912 logs.go:123] Gathering logs for container status ...
	I1208 01:45:36.530852 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:45:36.565171 1096912 logs.go:123] Gathering logs for kubelet ...
	I1208 01:45:36.565198 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:45:36.637319 1096912 logs.go:123] Gathering logs for dmesg ...
	I1208 01:45:36.637363 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:45:36.667302 1096912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:45:36.667423 1096912 out.go:285] * 
	W1208 01:45:36.667679 1096912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.667856 1096912 out.go:285] * 
	W1208 01:45:36.670553 1096912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:45:36.675751 1096912 out.go:203] 
	W1208 01:45:36.678675 1096912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.678951 1096912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:45:36.679016 1096912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:45:36.683776 1096912 out.go:203] 
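Both dumps above point at the same root cause: on this cgroup v1 host, kubelet v1.35.0-beta.0 fails its own configuration validation before it ever serves /healthz, so kubeadm times out waiting for the control plane. As a sketch of the two remedies the output itself names (profile, driver, and version reused from this log; anything else is an assumption, not something the report confirms):

	# Retry with the cgroup driver minikube's Suggestion line proposes:
	out/minikube-linux-arm64 start -p no-preload-536520 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

Alternatively, the SystemVerification warning says cgroup v1 can be re-allowed by setting the kubelet configuration option 'FailCgroupV1' to 'false'; in a KubeletConfiguration file that would look roughly like this (field casing assumed from upstream conventions, not shown in this report):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Opt-out named in the SystemVerification warning above.
	failCgroupV1: false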
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:37:18 no-preload-536520 containerd[758]: time="2025-12-08T01:37:18.211674218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.246725505Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.249013236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267094705Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267745122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.258971098Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.261089965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.269499475Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.270600866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.761254121Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.764004657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.776242422Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.786248661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.366835429Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.370354521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.377378157Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.378171320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.504878516Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.507125147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.516478294Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.517412037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.869447727Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.871885080Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.880678132Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.881176860Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:37.758670    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:37.759484    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:37.761116    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:37.761438    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:37.762929    5579 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:45:37 up  6:28,  0 user,  load average: 0.53, 1.71, 2.08
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:45:34 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:35 no-preload-536520 kubelet[5384]: E1208 01:45:35.159240    5384 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:35 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:35 no-preload-536520 kubelet[5389]: E1208 01:45:35.895446    5389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:36 no-preload-536520 kubelet[5471]: E1208 01:45:36.729128    5471 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:37 no-preload-536520 kubelet[5497]: E1208 01:45:37.418426    5497 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
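The kubelet section above shows the unit crash-looping (restart counters 318 through 321) on the same validation error each time, which is why kubeadm's 4m0s health wait can never succeed. A standard way to confirm which cgroup version the host mounts, as a sketch (this command is not part of the captured logs):

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup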

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 6 (371.016962ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:38.239145 1125625 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
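The status probe above also warns that kubectl still points at a stale context; the command it names would be invoked roughly like this, with the profile flag assumed from this run:

	# Re-point the kubectl context at this (currently stopped) profile.
	out/minikube-linux-arm64 update-context -p no-preload-536520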
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (511.31s)

TestStartStop/group/newest-cni/serial/FirstStart (499.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1208 01:44:08.104976  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:25.030935  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:30.128071  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:44:56.396994  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m18.254999058s)

-- stdout --
	* [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
** stderr ** 
	I1208 01:43:34.815729 1121810 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:34.815855 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.815867 1121810 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:34.815872 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.816138 1121810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:43:34.816580 1121810 out.go:368] Setting JSON to false
	I1208 01:43:34.817458 1121810 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23168,"bootTime":1765135047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:43:34.817532 1121810 start.go:143] virtualization:  
	I1208 01:43:34.821576 1121810 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:34.825841 1121810 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:34.825977 1121810 notify.go:221] Checking for updates...
	I1208 01:43:34.832259 1121810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:34.835279 1121810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:43:34.841305 1121810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:43:34.844728 1121810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:34.847821 1121810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:34.851422 1121810 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:34.851605 1121810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:34.879485 1121810 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:34.879731 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:34.965413 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:34.956183464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:34.965520 1121810 docker.go:319] overlay module found
	I1208 01:43:34.968791 1121810 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:34.971755 1121810 start.go:309] selected driver: docker
	I1208 01:43:34.971774 1121810 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:34.971788 1121810 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:34.972547 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:35.028561 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:35.017550524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:35.028722 1121810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:43:35.028754 1121810 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
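The warning above means --network-plugin=cni on its own ships no CNI, and the --cni flag it mentions is the friendlier way to pick one; the cni.go lines just below show minikube recommending kindnet for this docker/containerd pairing. A hedged sketch of that form, reusing this run's profile and versions (the CNI choice is illustrative):

	out/minikube-linux-arm64 start -p newest-cni-457779 --cni=kindnet \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0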
	I1208 01:43:35.029019 1121810 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:43:35.032208 1121810 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:35.035049 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:35.035123 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:35.035136 1121810 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:35.035233 1121810 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:35.038520 1121810 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:43:35.041429 1121810 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:43:35.044577 1121810 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:35.047363 1121810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:35.047458 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.047540 1121810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:43:35.047549 1121810 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:35.047628 1121810 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:35.047639 1121810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:43:35.047753 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:35.047771 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json: {Name:mk01a58f99ac25ab3f8420cd37e5943e99ab0d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:35.067817 1121810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:35.067841 1121810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:35.067860 1121810 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:35.067891 1121810 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:35.068006 1121810 start.go:364] duration metric: took 93.999µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:43:35.068037 1121810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:43:35.068112 1121810 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:35.071522 1121810 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:35.071830 1121810 start.go:159] libmachine.API.Create for "newest-cni-457779" (driver="docker")
	I1208 01:43:35.071875 1121810 client.go:173] LocalClient.Create starting
	I1208 01:43:35.072009 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:43:35.072049 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072066 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072149 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:43:35.072168 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072180 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072559 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:35.089783 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:35.089866 1121810 network_create.go:284] running [docker network inspect newest-cni-457779] to gather additional debugging logs...
	I1208 01:43:35.089892 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779
	W1208 01:43:35.107018 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 returned with exit code 1
	I1208 01:43:35.107053 1121810 network_create.go:287] error running [docker network inspect newest-cni-457779]: docker network inspect newest-cni-457779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-457779 not found
	I1208 01:43:35.107080 1121810 network_create.go:289] output of [docker network inspect newest-cni-457779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-457779 not found
	
	** /stderr **
	I1208 01:43:35.107206 1121810 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:35.124993 1121810 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:43:35.125469 1121810 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:43:35.125932 1121810 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:43:35.126507 1121810 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05be0}
	I1208 01:43:35.126532 1121810 network_create.go:124] attempt to create docker network newest-cni-457779 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:43:35.126598 1121810 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-457779 newest-cni-457779
	I1208 01:43:35.185488 1121810 network_create.go:108] docker network newest-cni-457779 192.168.76.0/24 created
	I1208 01:43:35.185521 1121810 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-457779" container
	I1208 01:43:35.185613 1121810 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:35.202705 1121810 cli_runner.go:164] Run: docker volume create newest-cni-457779 --label name.minikube.sigs.k8s.io=newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:35.222719 1121810 oci.go:103] Successfully created a docker volume newest-cni-457779
	I1208 01:43:35.222808 1121810 cli_runner.go:164] Run: docker run --rm --name newest-cni-457779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --entrypoint /usr/bin/test -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:35.763230 1121810 oci.go:107] Successfully prepared a docker volume newest-cni-457779
	I1208 01:43:35.763299 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.763314 1121810 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:35.763383 1121810 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:39.703637 1121810 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.940199388s)
	I1208 01:43:39.703674 1121810 kic.go:203] duration metric: took 3.940356568s to extract preloaded images to volume ...
	W1208 01:43:39.703822 1121810 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:39.703938 1121810 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:39.755193 1121810 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-457779 --name newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-457779 --network newest-cni-457779 --ip 192.168.76.2 --volume newest-cni-457779:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:40.098178 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Running}}
	I1208 01:43:40.139995 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.170386 1121810 cli_runner.go:164] Run: docker exec newest-cni-457779 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:40.229771 1121810 oci.go:144] the created container "newest-cni-457779" has a running status.
	I1208 01:43:40.229802 1121810 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa...
	I1208 01:43:40.697110 1121810 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:40.726929 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.749136 1121810 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:40.749165 1121810 kic_runner.go:114] Args: [docker exec --privileged newest-cni-457779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:43:40.813461 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.840725 1121810 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:40.840842 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:40.867802 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:40.868157 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:40.868172 1121810 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:41.095667 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.095764 1121810 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:43:41.095876 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.120122 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.120469 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.120480 1121810 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:43:41.290623 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.290789 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.311253 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.311570 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.311587 1121810 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:43:41.483218 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:41.483251 1121810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:43:41.483283 1121810 ubuntu.go:190] setting up certificates
	I1208 01:43:41.483304 1121810 provision.go:84] configureAuth start
	I1208 01:43:41.483379 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:41.501594 1121810 provision.go:143] copyHostCerts
	I1208 01:43:41.501670 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:43:41.501684 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:43:41.501765 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:43:41.501870 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:43:41.501882 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:43:41.501911 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:43:41.501965 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:43:41.501974 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:43:41.501997 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:43:41.502054 1121810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:43:41.701737 1121810 provision.go:177] copyRemoteCerts
	I1208 01:43:41.701810 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:41.701853 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.719228 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:41.826667 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:41.845605 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:43:41.864446 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:43:41.883477 1121810 provision.go:87] duration metric: took 400.143683ms to configureAuth
	I1208 01:43:41.883508 1121810 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:41.883715 1121810 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:41.883727 1121810 machine.go:97] duration metric: took 1.042983827s to provisionDockerMachine
	I1208 01:43:41.883734 1121810 client.go:176] duration metric: took 6.811847736s to LocalClient.Create
	I1208 01:43:41.883755 1121810 start.go:167] duration metric: took 6.811927679s to libmachine.API.Create "newest-cni-457779"
	I1208 01:43:41.883766 1121810 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:43:41.883777 1121810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:41.883842 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:41.883884 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.901984 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.009332 1121810 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:42.014632 1121810 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:42.014671 1121810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:42.014684 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:43:42.014745 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:43:42.014838 1121810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:43:42.014945 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:42.027153 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:42.048521 1121810 start.go:296] duration metric: took 164.740218ms for postStartSetup
	I1208 01:43:42.048996 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.068188 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:42.068517 1121810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:42.068578 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.089135 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.196671 1121810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
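	The two df probes are minikube's free-space check on the node: the first reads the used percentage of the filesystem backing /var, the second its free space in whole gigabytes. The same probe, runnable by hand over minikube ssh (a sketch; the column positions assume GNU coreutils df output):

	    df -h /var | awk 'NR==2{print $5}'   # used, e.g. "23%"
	    df -BG /var | awk 'NR==2{print $4}'  # available, e.g. "38G"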
	I1208 01:43:42.202509 1121810 start.go:128] duration metric: took 7.134380905s to createHost
	I1208 01:43:42.202544 1121810 start.go:83] releasing machines lock for "newest-cni-457779", held for 7.134523987s
	I1208 01:43:42.202651 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.225469 1121810 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:42.225530 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.225555 1121810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:42.225620 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.248192 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.252198 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.354818 1121810 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:42.448479 1121810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:42.453374 1121810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:42.453474 1121810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:42.481420 1121810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:43:42.481454 1121810 start.go:496] detecting cgroup driver to use...
	I1208 01:43:42.481487 1121810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:42.481545 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:43:42.497315 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:43:42.510801 1121810 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:42.510908 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:42.528913 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:42.549245 1121810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:42.677688 1121810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:42.808025 1121810 docker.go:234] disabling docker service ...
	I1208 01:43:42.808134 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:42.829668 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:42.844784 1121810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:42.967423 1121810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:43.080509 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:43.099271 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:43.116361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:43:43.125920 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:43:43.135415 1121810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:43:43.135546 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:43:43.145049 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.154361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:43:43.163282 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.172992 1121810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:43.183456 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:43:43.192700 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:43:43.201918 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:43:43.211033 1121810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:43.218575 1121810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:43.226217 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.336148 1121810 ssh_runner.go:195] Run: sudo systemctl restart containerd
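	The sed edits above rewrite /etc/containerd/config.toml so the runc shim uses SystemdCgroup = false, i.e. the cgroupfs driver detected on the host, and the restart makes the change effective. A quick consistency check after the restart (a sketch, assuming the default config path edited above):

	    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	    sudo systemctl is-active containerd                   # expect: active

	The kubelet must use the same driver; the generated KubeletConfiguration below pins cgroupDriver: cgroupfs to match.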
	I1208 01:43:43.485946 1121810 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:43:43.486057 1121810 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:43:43.490114 1121810 start.go:564] Will wait 60s for crictl version
	I1208 01:43:43.490229 1121810 ssh_runner.go:195] Run: which crictl
	I1208 01:43:43.494026 1121810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:43.518236 1121810 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:43:43.518359 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.546503 1121810 ssh_runner.go:195] Run: containerd --version
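	The Version block above comes from crictl talking to the CRI socket configured in /etc/crictl.yaml, while the two containerd --version runs cross-check the binary itself. The same query can be made explicit (a sketch; the endpoint flag merely restates what crictl.yaml already sets):

	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version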
	I1208 01:43:43.572460 1121810 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:43:43.575475 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:43.591594 1121810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:43.595521 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
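	The /etc/hosts rewrite follows a filter-and-append pattern: drop any stale host.minikube.internal entry, emit the fresh mapping, and copy the assembled file back over /etc/hosts in one step. Generalized (a sketch; NAME and IP are placeholders for the values in the log):

	    NAME=host.minikube.internal
	    IP=192.168.76.1
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$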
	I1208 01:43:43.608351 1121810 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:43:43.611336 1121810 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:43.611494 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:43.611589 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.637041 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.637067 1121810 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:43:43.637131 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.663968 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.663994 1121810 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:43.664003 1121810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:43:43.664106 1121810 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
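	The bare ExecStart= line in the generated drop-in is the standard systemd override idiom: a unit of the default service type may carry only one ExecStart, so a drop-in that replaces the command must first reset the inherited value with an empty assignment before supplying its own. Reduced to its shape (a sketch of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with only the essential flags):

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2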
	I1208 01:43:43.664184 1121810 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:43:43.690508 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:43.690535 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:43.690554 1121810 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:43:43.690578 1121810 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:43.690708 1121810 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
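	The rendered kubeadm.yaml stacks four API documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases (v1.26+) can lint such a file before init is attempted (a sketch, using the path the file is written to below):

	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml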
	
	I1208 01:43:43.690785 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:43:43.698933 1121810 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:43.699056 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:43.707017 1121810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:43:43.720830 1121810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:43:43.734819 1121810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1208 01:43:43.748443 1121810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:43.752534 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:43.763093 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.892382 1121810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:43:43.909690 1121810 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:43:43.909718 1121810 certs.go:195] generating shared ca certs ...
	I1208 01:43:43.909736 1121810 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:43.909947 1121810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:43:43.910028 1121810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:43:43.910042 1121810 certs.go:257] generating profile certs ...
	I1208 01:43:43.910113 1121810 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:43:43.910132 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt with IP's: []
	I1208 01:43:44.271233 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt ...
	I1208 01:43:44.271267 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt: {Name:mka7ec1a9b348db295896c4fbe93c78f0eac2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271468 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key ...
	I1208 01:43:44.271482 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key: {Name:mkc310f4a570315e10c49516c56b2513b55aa651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271582 1121810 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:43:44.271600 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:43:44.830639 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 ...
	I1208 01:43:44.830674 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399: {Name:mk4dcde78303e922dc6fd9b0f86bb4a694f9ca60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830866 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 ...
	I1208 01:43:44.830881 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399: {Name:mk518276ee5546392f5eb2700a48869cb6431589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830967 1121810 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt
	I1208 01:43:44.831050 1121810 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key
	I1208 01:43:44.831119 1121810 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:43:44.831137 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt with IP's: []
	I1208 01:43:44.882804 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt ...
	I1208 01:43:44.882832 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt: {Name:mk7d1a29564431efb40b45d0c303e991b7f53000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883011 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key ...
	I1208 01:43:44.883025 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key: {Name:mk7882d520d12c1dd539975ac85c206b173a5dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883213 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:43:44.883262 1121810 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:44.883275 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:43:44.883314 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:44.883347 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:44.883379 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:43:44.883428 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:44.884017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:44.902791 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:44.921385 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:44.941017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:44.959449 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:43:44.976974 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:44.995099 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:45.050850 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:45.098831 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:43:45.141186 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:45.192133 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:43:45.239267 1121810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:43:45.262695 1121810 ssh_runner.go:195] Run: openssl version
	I1208 01:43:45.278120 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.292034 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:43:45.303687 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308166 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308244 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.359063 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.376429 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.385683 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.402964 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:45.419573 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426601 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426673 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.470891 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:45.478953 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:43:45.487067 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.495033 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:43:45.505252 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509187 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509254 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.550865 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:45.558694 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
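	The openssl x509 -hash runs compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, and each ln -fs creates the <hash>.0 symlink that makes the certificate discoverable; the literal names in the paths above (3ec20f2e.0, b5213941.0, 51391683.0) are those computed hashes. The pattern in one piece (a sketch for a single PEM file):

	    CERT=/usr/share/ca-certificates/846711.pem    # example path from the log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"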
	I1208 01:43:45.566191 1121810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:45.569801 1121810 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:45.569857 1121810 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:45.569950 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:43:45.570007 1121810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:45.598900 1121810 cri.go:89] found id: ""
	I1208 01:43:45.598976 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:45.606734 1121810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:45.614639 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:45.614740 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:45.622525 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:45.622586 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:45.622667 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:43:45.630915 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:45.631002 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:45.638607 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:43:45.646509 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:45.646578 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:45.654866 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.663136 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:45.663229 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.670925 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:43:45.679164 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:45.679233 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:43:45.686793 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:43:45.726191 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:43:45.726671 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:45.801402 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:45.801539 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:45.801611 1121810 kubeadm.go:319] OS: Linux
	I1208 01:43:45.801690 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:45.801780 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:45.801861 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:45.801944 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:45.802035 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:45.802107 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:45.802179 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:45.802261 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:45.802332 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:45.874152 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:45.874329 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:45.874481 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:45.879465 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:45.885605 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:45.885775 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:45.885880 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:46.228184 1121810 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:46.789407 1121810 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:46.965778 1121810 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:47.194652 1121810 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:47.706685 1121810 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:47.707058 1121810 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:47.801474 1121810 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:47.801936 1121810 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:48.142552 1121810 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:48.263003 1121810 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:48.445660 1121810 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:48.445984 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:48.591329 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:49.028618 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:49.379863 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:49.569393 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:50.065560 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:50.066253 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:50.069265 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:50.072991 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:43:50.073097 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:50.073174 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:50.073707 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:50.092183 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:50.092359 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:50.100854 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:50.101429 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:50.101728 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:50.234928 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:50.235054 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:47:50.240219 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005281202s
	I1208 01:47:50.240251 1121810 kubeadm.go:319] 
	I1208 01:47:50.240305 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:47:50.240337 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:47:50.240436 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:47:50.240441 1121810 kubeadm.go:319] 
	I1208 01:47:50.240540 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:47:50.240570 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:47:50.240599 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:47:50.240604 1121810 kubeadm.go:319] 
	I1208 01:47:50.244623 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:47:50.245144 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:47:50.245269 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:47:50.245523 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:47:50.245536 1121810 kubeadm.go:319] 
	I1208 01:47:50.245606 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 01:47:50.245750 1121810 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005281202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
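	The preflight warnings point at the likely cause rather than the timeout itself: the node runs cgroups v1 (kernel 5.15.0-1084-aws, cgroupfs driver), and kubelet v1.35 refuses to come up on a v1 host unless that is explicitly allowed, so the health endpoint on 127.0.0.1:10248 never answers. Per the warning text, the opt-out lives in the kubelet configuration; in the serialized KubeletConfiguration the field is spelled failCgroupV1 (a sketch of the fragment that would have to be merged into the generated config, assuming that field name in v1.35, plus the preflight validation skip the warning also demands):

	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false   # explicitly permit running on a cgroup v1 host

	Running 'journalctl -xeu kubelet' on the node, as the output suggests, would confirm the refusal before changing anything.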
	
	I1208 01:47:50.245846 1121810 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:47:50.662033 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:47:50.675434 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:47:50.675505 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:47:50.683543 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:47:50.683562 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:47:50.683614 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:47:50.691591 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:47:50.691654 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:47:50.699350 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:47:50.707376 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:47:50.707443 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:47:50.715135 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.723172 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:47:50.723260 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.731275 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:47:50.739267 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:47:50.739332 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:47:50.747081 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:47:50.787448 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:47:50.787678 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:47:50.866311 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:47:50.866389 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:47:50.866433 1121810 kubeadm.go:319] OS: Linux
	I1208 01:47:50.866508 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:47:50.866562 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:47:50.866613 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:47:50.866667 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:47:50.866720 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:47:50.866772 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:47:50.866821 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:47:50.866871 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:47:50.866921 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:47:50.933039 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:47:50.933171 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:47:50.933266 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:47:50.942872 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:47:50.948105 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:47:50.948200 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:47:50.948265 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:47:50.948342 1121810 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:47:50.948403 1121810 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:47:50.948473 1121810 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:47:50.948531 1121810 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:47:50.948594 1121810 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:47:50.948661 1121810 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:47:50.948735 1121810 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:47:50.948807 1121810 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:47:50.948845 1121810 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:47:50.948901 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:47:51.112853 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:47:51.634368 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:47:51.809543 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:47:52.224203 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:47:52.422413 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:47:52.423095 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:47:52.425801 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:47:52.429055 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:47:52.429171 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:47:52.429263 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:47:52.429335 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:47:52.449800 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:47:52.449912 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:47:52.458225 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:47:52.458717 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:47:52.458770 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:47:52.594110 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:47:52.594228 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:51:52.594163 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000318083s
	I1208 01:51:52.594189 1121810 kubeadm.go:319] 
	I1208 01:51:52.594247 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:51:52.594280 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:51:52.594385 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:51:52.594389 1121810 kubeadm.go:319] 
	I1208 01:51:52.594514 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:51:52.594548 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:51:52.594578 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:51:52.594582 1121810 kubeadm.go:319] 
	I1208 01:51:52.598647 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:51:52.599081 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:51:52.599190 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:51:52.599423 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:51:52.599429 1121810 kubeadm.go:319] 
	I1208 01:51:52.599545 1121810 kubeadm.go:403] duration metric: took 8m7.029694705s to StartCluster
	I1208 01:51:52.599580 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:51:52.599643 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:51:52.599710 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:51:52.631244 1121810 cri.go:89] found id: ""
	I1208 01:51:52.631271 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.631280 1121810 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:51:52.631288 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:51:52.631353 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:51:52.658412 1121810 cri.go:89] found id: ""
	I1208 01:51:52.658492 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.658502 1121810 logs.go:284] No container was found matching "etcd"
	I1208 01:51:52.658519 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:51:52.658610 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:51:52.685789 1121810 cri.go:89] found id: ""
	I1208 01:51:52.685814 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.685823 1121810 logs.go:284] No container was found matching "coredns"
	I1208 01:51:52.685829 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:51:52.685887 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:51:52.713200 1121810 cri.go:89] found id: ""
	I1208 01:51:52.713226 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.713235 1121810 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:51:52.713241 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:51:52.713299 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:51:52.737730 1121810 cri.go:89] found id: ""
	I1208 01:51:52.737756 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.737765 1121810 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:51:52.737771 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:51:52.737829 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:51:52.763894 1121810 cri.go:89] found id: ""
	I1208 01:51:52.763928 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.763937 1121810 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:51:52.763944 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:51:52.764012 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:51:52.788698 1121810 cri.go:89] found id: ""
	I1208 01:51:52.788762 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.788777 1121810 logs.go:284] No container was found matching "kindnet"
	I1208 01:51:52.788788 1121810 logs.go:123] Gathering logs for kubelet ...
	I1208 01:51:52.788799 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:51:52.846920 1121810 logs.go:123] Gathering logs for dmesg ...
	I1208 01:51:52.846956 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:51:52.862048 1121810 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:51:52.862076 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:51:52.931760 1121810 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:51:52.923107    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.923875    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.925579    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.926258    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.927897    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:51:52.923107    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.923875    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.925579    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.926258    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.927897    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:51:52.931799 1121810 logs.go:123] Gathering logs for containerd ...
	I1208 01:51:52.931812 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:51:52.970819 1121810 logs.go:123] Gathering logs for container status ...
	I1208 01:51:52.970855 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1208 01:51:53.000445 1121810 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:51:53.000503 1121810 out.go:285] * 
	W1208 01:51:53.000560 1121810 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.000578 1121810 out.go:285] * 
	W1208 01:51:53.002833 1121810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:51:53.009552 1121810 out.go:203] 
	W1208 01:51:53.012504 1121810 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.012580 1121810 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:51:53.012606 1121810 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:51:53.015855 1121810 out.go:203] 

                                                
                                                
** /stderr **
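The SystemVerification warning repeated throughout the log names a kubelet option for hosts still on cgroup v1. A minimal sketch of setting it, assuming the YAML field spelling failCgroupV1 from the KEP the warning links to (verify against the kubelet version in use):

	# Append a KubeletConfiguration document to the kubeadm config used above,
	# so the kubelet tolerates cgroup v1 instead of treating it as fatal.
	cat <<'EOF' >> /var/tmp/minikube/kubeadm.yaml
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF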
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
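The suggestion minikube prints at the end of the log can be combined with the failing invocation recorded above; a sketch (all flags copied from that invocation, plus the suggested one):

	out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 \
	  --alsologtostderr --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd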
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-457779
helpers_test.go:243: (dbg) docker inspect newest-cni-457779:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	        "Created": "2025-12-08T01:43:39.768991386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1122247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:43:39.838290223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hostname",
	        "HostsPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hosts",
	        "LogPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515-json.log",
	        "Name": "/newest-cni-457779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-457779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-457779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	                "LowerDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-457779",
	                "Source": "/var/lib/docker/volumes/newest-cni-457779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-457779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-457779",
	                "name.minikube.sigs.k8s.io": "newest-cni-457779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7e1afe08172b5d6e1b59898e41a0a10f530b283274a009e928ed8f8bd2ac007",
	            "SandboxKey": "/var/run/docker/netns/b7e1afe08172",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-457779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:c8:ef:fa:a0:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e759035a3431798f7b6fae1fcd872afa7240c356fb1da4c53589714768a6edc3",
	                    "EndpointID": "2d01411269374733ce9c99388d7ff970811ced41e065bd82a2eb4412dd772c8f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-457779",
	                        "638bfd2d42fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
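The inspect dump shows every published port bound to 127.0.0.1 with an ephemeral host port (8443/tcp maps to 33866 here). A sketch of extracting that mapping directly with docker inspect's Go-template format flag:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' newest-cni-457779
	# prints 33866 for the container inspected above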
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 6 (364.877385ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:51:53.460869 1134247 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
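The exit status 6 above is a kubeconfig problem rather than a host problem: the container reports Running, but no "newest-cni-457779" entry exists in the kubeconfig the check consulted. A minimal sketch, reusing the binary and paths from the log, of how that state could be confirmed and repaired by hand:

    # list contexts in the kubeconfig the status check consulted
    KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig kubectl config get-contexts
    # rewrite the context for this profile, as the warning above suggests
    out/minikube-linux-arm64 -p newest-cni-457779 update-context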
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:47:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:47:25.144534 1128548 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:47:25.144678 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144689 1128548 out.go:374] Setting ErrFile to fd 2...
	I1208 01:47:25.144694 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144937 1128548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:47:25.145300 1128548 out.go:368] Setting JSON to false
	I1208 01:47:25.146183 1128548 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23398,"bootTime":1765135047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:47:25.146250 1128548 start.go:143] virtualization:  
	I1208 01:47:25.149287 1128548 out.go:179] * [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:47:25.152995 1128548 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:47:25.153086 1128548 notify.go:221] Checking for updates...
	I1208 01:47:25.159060 1128548 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:47:25.162001 1128548 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:25.164903 1128548 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:47:25.167734 1128548 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:47:25.170597 1128548 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:47:25.173940 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:25.174644 1128548 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:47:25.196090 1128548 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:47:25.196210 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.265905 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.255621287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.266017 1128548 docker.go:319] overlay module found
	I1208 01:47:25.271179 1128548 out.go:179] * Using the docker driver based on existing profile
	I1208 01:47:25.274140 1128548 start.go:309] selected driver: docker
	I1208 01:47:25.274161 1128548 start.go:927] validating driver "docker" against &{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.274269 1128548 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:47:25.275138 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.329239 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.319858232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.329577 1128548 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:47:25.329613 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:25.329682 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:25.329727 1128548 start.go:353] cluster config:
	{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.332831 1128548 out.go:179] * Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	I1208 01:47:25.335606 1128548 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:47:25.338488 1128548 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:47:25.341508 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:25.341637 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.341965 1128548 cache.go:107] acquiring lock: {Name:mk26e7e88ac6993c5141f2d02121dfa2fc547fd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342041 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:47:25.342050 1128548 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.869µs
	I1208 01:47:25.342063 1128548 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:47:25.342075 1128548 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:47:25.342168 1128548 cache.go:107] acquiring lock: {Name:mk0f1b4d6e089d68a7c2b058d311e225652853b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342209 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:47:25.342215 1128548 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 51.192µs
	I1208 01:47:25.342221 1128548 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342231 1128548 cache.go:107] acquiring lock: {Name:mk597bd9b4cd05f2d1a0093859d8b23b8ea1cd1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342263 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:47:25.342268 1128548 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.556µs
	I1208 01:47:25.342274 1128548 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342284 1128548 cache.go:107] acquiring lock: {Name:mka22e7ada81429241ca2443bce21a3f31b8eb66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342310 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:47:25.342315 1128548 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.83µs
	I1208 01:47:25.342323 1128548 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342334 1128548 cache.go:107] acquiring lock: {Name:mkfea4ee3c261ad6c1d7efee63fc672216a4c310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342372 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:47:25.342381 1128548 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.246µs
	I1208 01:47:25.342387 1128548 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342398 1128548 cache.go:107] acquiring lock: {Name:mk8813c8ba18f703b4246d4ffd8656e53b0f2ec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342424 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:47:25.342429 1128548 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 32µs
	I1208 01:47:25.342434 1128548 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:47:25.342566 1128548 cache.go:107] acquiring lock: {Name:mk98329aaba04bc9ea4839996e52989df0918014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342601 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:47:25.342606 1128548 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 163.997µs
	I1208 01:47:25.342654 1128548 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:47:25.342670 1128548 cache.go:107] acquiring lock: {Name:mk58db1a89606bc77924fd68a726167dcd840a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342704 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:47:25.342709 1128548 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 41.297µs
	I1208 01:47:25.342715 1128548 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:47:25.342736 1128548 cache.go:87] Successfully saved all images to host disk.
	I1208 01:47:25.366899 1128548 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:47:25.366925 1128548 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:47:25.366941 1128548 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:47:25.366970 1128548 start.go:360] acquireMachinesLock for no-preload-536520: {Name:mkcfe59c9f9ccdd77be288a5dfb4e3b57f6ad839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.367026 1128548 start.go:364] duration metric: took 36.948µs to acquireMachinesLock for "no-preload-536520"
	I1208 01:47:25.367050 1128548 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:47:25.367061 1128548 fix.go:54] fixHost starting: 
	I1208 01:47:25.367326 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.386712 1128548 fix.go:112] recreateIfNeeded on no-preload-536520: state=Stopped err=<nil>
	W1208 01:47:25.386739 1128548 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:47:25.390105 1128548 out.go:252] * Restarting existing docker container for "no-preload-536520" ...
	I1208 01:47:25.390183 1128548 cli_runner.go:164] Run: docker start no-preload-536520
	I1208 01:47:25.647203 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.672717 1128548 kic.go:430] container "no-preload-536520" state is running.
	I1208 01:47:25.673659 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:25.696605 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.696850 1128548 machine.go:94] provisionDockerMachine start ...
	I1208 01:47:25.696917 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:25.719881 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:25.720251 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:25.720267 1128548 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:47:25.721006 1128548 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:47:28.871078 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:28.871100 1128548 ubuntu.go:182] provisioning hostname "no-preload-536520"
	I1208 01:47:28.871170 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:28.890322 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:28.890666 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:28.890685 1128548 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-536520 && echo "no-preload-536520" | sudo tee /etc/hostname
	I1208 01:47:29.051907 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:29.051990 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.071065 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:29.071389 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:29.071409 1128548 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-536520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-536520/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-536520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:47:29.230899 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: 
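The empty output above is consistent with the guard in the script: it touches /etc/hosts only when no line already ends in the hostname, rewriting a stale 127.0.1.1 entry in place (silently, via sed -i) or appending one otherwise, so repeated provisioning is idempotent. A hypothetical spot-check (not part of the test) over the profile's ssh access:

    out/minikube-linux-arm64 -p no-preload-536520 ssh -- grep no-preload-536520 /etc/hosts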
	I1208 01:47:29.230991 1128548 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:47:29.231044 1128548 ubuntu.go:190] setting up certificates
	I1208 01:47:29.231075 1128548 provision.go:84] configureAuth start
	I1208 01:47:29.231167 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.253696 1128548 provision.go:143] copyHostCerts
	I1208 01:47:29.253770 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:47:29.253785 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:47:29.253863 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:47:29.253977 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:47:29.253988 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:47:29.254015 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:47:29.254069 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:47:29.254079 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:47:29.254103 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:47:29.254158 1128548 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.no-preload-536520 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-536520]
	I1208 01:47:29.311134 1128548 provision.go:177] copyRemoteCerts
	I1208 01:47:29.311266 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:47:29.311345 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.328954 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.438688 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:47:29.457711 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:47:29.476430 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:47:29.494899 1128548 provision.go:87] duration metric: took 263.788812ms to configureAuth
	I1208 01:47:29.494927 1128548 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:47:29.495124 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:29.495136 1128548 machine.go:97] duration metric: took 3.798277669s to provisionDockerMachine
	I1208 01:47:29.495144 1128548 start.go:293] postStartSetup for "no-preload-536520" (driver="docker")
	I1208 01:47:29.495155 1128548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:47:29.495213 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:47:29.495257 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.514045 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.618827 1128548 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:47:29.622435 1128548 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:47:29.622486 1128548 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:47:29.622498 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:47:29.622555 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:47:29.622644 1128548 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:47:29.622753 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:47:29.630547 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:29.648755 1128548 start.go:296] duration metric: took 153.595239ms for postStartSetup
	I1208 01:47:29.648836 1128548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:47:29.648887 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.666163 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.767705 1128548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:47:29.772600 1128548 fix.go:56] duration metric: took 4.405531603s for fixHost
	I1208 01:47:29.772626 1128548 start.go:83] releasing machines lock for "no-preload-536520", held for 4.405586815s
	I1208 01:47:29.772706 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.789841 1128548 ssh_runner.go:195] Run: cat /version.json
	I1208 01:47:29.789937 1128548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:47:29.789945 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.789996 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.812717 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.816251 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:30.016925 1128548 ssh_runner.go:195] Run: systemctl --version
	I1208 01:47:30.052830 1128548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:47:30.068491 1128548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:47:30.068623 1128548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:47:30.091127 1128548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:47:30.091213 1128548 start.go:496] detecting cgroup driver to use...
	I1208 01:47:30.092843 1128548 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:47:30.092996 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:47:30.118949 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:47:30.135926 1128548 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:47:30.136004 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:47:30.154610 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:47:30.169484 1128548 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:47:30.284887 1128548 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:47:30.401821 1128548 docker.go:234] disabling docker service ...
	I1208 01:47:30.401907 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:47:30.417825 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:47:30.431329 1128548 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:47:30.549687 1128548 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:47:30.701450 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:47:30.714761 1128548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:47:30.729964 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:47:30.740571 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:47:30.749764 1128548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:47:30.749912 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:47:30.759343 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.768528 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:47:30.777543 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.786277 1128548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:47:30.795137 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:47:30.804067 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:47:30.812868 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:47:30.821824 1128548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:47:30.829778 1128548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:47:30.837363 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:30.946320 1128548 ssh_runner.go:195] Run: sudo systemctl restart containerd
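The sequence above points crictl at the containerd socket via /etc/crictl.yaml, pins the sandbox image to registry.k8s.io/pause:3.10.1, selects the cgroupfs driver (SystemdCgroup = false) to match the host, and normalizes the runtime to io.containerd.runc.v2 before restarting the daemon. A minimal sketch, reusing the profile's ssh access, to spot-check what the sed edits left behind:

    out/minikube-linux-arm64 -p no-preload-536520 ssh -- grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml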
	I1208 01:47:31.048379 1128548 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:47:31.048491 1128548 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:47:31.052641 1128548 start.go:564] Will wait 60s for crictl version
	I1208 01:47:31.052748 1128548 ssh_runner.go:195] Run: which crictl
	I1208 01:47:31.056733 1128548 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:47:31.082010 1128548 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:47:31.082131 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.107752 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.134903 1128548 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:47:31.137907 1128548 cli_runner.go:164] Run: docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:47:31.155436 1128548 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:47:31.159751 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:47:31.170770 1128548 kubeadm.go:884] updating cluster {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:47:31.170896 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:31.170961 1128548 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:47:31.196822 1128548 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:47:31.196850 1128548 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:47:31.196858 1128548 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:47:31.196959 1128548 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-536520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
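The unit above is a systemd override: the empty ExecStart= clears the packaged command, and the second ExecStart relaunches the versioned kubelet with this node's IP, hostname override, and kubeconfig. It is installed further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; one way to view the merged result on the node (a sketch, not a test step):

    out/minikube-linux-arm64 -p no-preload-536520 ssh -- systemctl cat kubelet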
	I1208 01:47:31.197036 1128548 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:47:31.226544 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:31.226567 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:31.226586 1128548 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:47:31.226628 1128548 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-536520 NodeName:no-preload-536520 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:47:31.226797 1128548 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-536520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
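The full kubeadm/kubelet/kube-proxy config above is rendered from the parsed options in the preceding kubeadm.go:190 line and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp below). Note also that the KubeletConfiguration deliberately disables disk-pressure eviction (imageGCHighThresholdPercent: 100, all evictionHard thresholds at "0%"), so test nodes on nearly-full CI disks are not evicted mid-run. A minimal sketch of the render step, assuming a hypothetical options struct and template (not minikube's actual internals):

// Illustrative only: rendering a kubeadm InitConfiguration fragment
// from parsed options with text/template. Names are hypothetical.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	opts := kubeadmOpts{AdvertiseAddress: "192.168.85.2", BindPort: 8443, NodeName: "no-preload-536520"}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}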
	I1208 01:47:31.226877 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:47:31.234989 1128548 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:47:31.235078 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:47:31.242869 1128548 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:47:31.261281 1128548 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:47:31.274779 1128548 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 01:47:31.288252 1128548 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:47:31.292107 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
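The bash one-liner above filters out any stale control-plane.minikube.internal entry, re-appends it with the current node IP, and writes through a temp file so /etc/hosts is replaced in one cp. A rough Go equivalent, purely illustrative (ensureHostsEntry is a hypothetical helper, not minikube's code):

// Sketch of the /etc/hosts idiom in the logged command: drop any line
// ending in "\t<host>", then append "<ip>\t<host>".
package hosts

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // mirrors grep -v $'\t<host>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}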
	I1208 01:47:31.302333 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:31.430216 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:31.454999 1128548 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520 for IP: 192.168.85.2
	I1208 01:47:31.455023 1128548 certs.go:195] generating shared ca certs ...
	I1208 01:47:31.455040 1128548 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:31.455242 1128548 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:47:31.455311 1128548 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:47:31.455324 1128548 certs.go:257] generating profile certs ...
	I1208 01:47:31.456430 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key
	I1208 01:47:31.456527 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035
	I1208 01:47:31.456618 1128548 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key
	I1208 01:47:31.456780 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:47:31.456840 1128548 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:47:31.456857 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:47:31.456908 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:47:31.457070 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:47:31.457132 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:47:31.457218 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:31.457887 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:47:31.480298 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:47:31.500772 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:47:31.521065 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:47:31.540174 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:47:31.558342 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:47:31.576697 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:47:31.595322 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:47:31.613726 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:47:31.631953 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:47:31.650290 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:47:31.669394 1128548 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:47:31.682263 1128548 ssh_runner.go:195] Run: openssl version
	I1208 01:47:31.688781 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.696556 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:47:31.704170 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.707971 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.708038 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.749348 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:47:31.756878 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.764251 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:47:31.771591 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775408 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775525 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.817721 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:47:31.825386 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.832732 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:47:31.840622 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844445 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844541 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.885480 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
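Each CA installed under /usr/share/ca-certificates is made discoverable by linking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash` (b5213941 for minikubeCA.pem; 3ec20f2e and 51391683 for the test certs above). A sketch of that pairing, assuming a hypothetical installCA helper that shells out to openssl exactly as the log does:

// Sketch: compute the OpenSSL subject hash of a CA PEM and symlink
// /etc/ssl/certs/<hash>.0 to it so TLS clients can find the CA.
package certs

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}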
	I1208 01:47:31.892792 1128548 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:47:31.896494 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:47:31.937567 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:47:31.978872 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:47:32.020623 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:47:32.062625 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:47:32.104715 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
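The six `-checkend 86400` invocations above assert that each control-plane certificate remains valid for at least another 24 hours. The same check can be done natively; a sketch under the assumption of PEM-encoded files (validFor is an illustrative helper):

// Sketch of what `openssl x509 -checkend 86400` verifies: the cert's
// NotAfter must lie at least d in the future.
package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

func validFor(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return errors.New(path + " expires within " + d.String())
	}
	return nil
}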
	I1208 01:47:32.148960 1128548 kubeadm.go:401] StartCluster: {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:32.149129 1128548 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:47:32.149249 1128548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:47:32.177182 1128548 cri.go:89] found id: ""
	I1208 01:47:32.177302 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:47:32.185467 1128548 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:47:32.185537 1128548 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:47:32.185605 1128548 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:47:32.193198 1128548 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:47:32.193631 1128548 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.193734 1128548 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-536520" cluster setting kubeconfig missing "no-preload-536520" context setting]
	I1208 01:47:32.194046 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
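kubeconfig.go detects that the no-preload-536520 cluster and context entries are missing and rewrites the kubeconfig under a file lock. A sketch of such a repair using client-go's clientcmd package (the repair helper and its arguments are assumptions, not minikube's API):

// Sketch: add missing cluster/context entries for a profile and
// write the kubeconfig back to disk.
package kubecfg

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func repair(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cluster := clientcmdapi.NewCluster()
	cluster.Server = server // e.g. https://192.168.85.2:8443
	cfg.Clusters[name] = cluster
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx
	return clientcmd.WriteToFile(*cfg, path)
}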
	I1208 01:47:32.195334 1128548 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:47:32.203280 1128548 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:47:32.203313 1128548 kubeadm.go:602] duration metric: took 17.762571ms to restartPrimaryControlPlane
	I1208 01:47:32.203323 1128548 kubeadm.go:403] duration metric: took 54.376484ms to StartCluster
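The restart decision above hinges on a plain `diff` of the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new: identical files mean the running cluster needs no reconfiguration, which is why restartPrimaryControlPlane returns after only ~18ms. A minimal sketch of that check (needsReconfig is hypothetical):

// Sketch: compare current vs. proposed kubeadm config; any difference
// (or a missing file) means the control plane must be reconfigured.
package restart

import (
	"bytes"
	"os"
)

func needsReconfig(current, proposed string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // missing config => take the full (re)init path
	}
	b, err := os.ReadFile(proposed)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(a, b), nil
}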
	I1208 01:47:32.203356 1128548 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.203428 1128548 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.204022 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.204232 1128548 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:47:32.204520 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:32.204571 1128548 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:47:32.204638 1128548 addons.go:70] Setting storage-provisioner=true in profile "no-preload-536520"
	I1208 01:47:32.204677 1128548 addons.go:239] Setting addon storage-provisioner=true in "no-preload-536520"
	I1208 01:47:32.204699 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.205163 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205504 1128548 addons.go:70] Setting default-storageclass=true in profile "no-preload-536520"
	I1208 01:47:32.205524 1128548 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-536520"
	I1208 01:47:32.205784 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205965 1128548 addons.go:70] Setting dashboard=true in profile "no-preload-536520"
	I1208 01:47:32.205979 1128548 addons.go:239] Setting addon dashboard=true in "no-preload-536520"
	W1208 01:47:32.205986 1128548 addons.go:248] addon dashboard should already be in state true
	I1208 01:47:32.206006 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.206434 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.211155 1128548 out.go:179] * Verifying Kubernetes components...
	I1208 01:47:32.214232 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:32.243438 1128548 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:47:32.247673 1128548 addons.go:239] Setting addon default-storageclass=true in "no-preload-536520"
	I1208 01:47:32.247722 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.248146 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.248283 1128548 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:47:32.249393 1128548 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.249422 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:47:32.249479 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.254001 1128548 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:47:32.258300 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:47:32.258329 1128548 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:47:32.258396 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.286961 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.294498 1128548 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.294541 1128548 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:47:32.294621 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.308511 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.336182 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.455434 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:32.503421 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.512052 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:47:32.512091 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:47:32.529651 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:47:32.529679 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:47:32.538583 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.564793 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:47:32.564828 1128548 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:47:32.600166 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:47:32.600234 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:47:32.620391 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:47:32.620414 1128548 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:47:32.635342 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:47:32.635368 1128548 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:47:32.649355 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:47:32.649379 1128548 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:47:32.663687 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:47:32.663714 1128548 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:47:32.677279 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:32.677303 1128548 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:47:32.690968 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:33.092759 1128548 node_ready.go:35] waiting up to 6m0s for node "no-preload-536520" to be "Ready" ...
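node_ready.go then polls the node object for up to 6m0s until its Ready condition turns true. A sketch of such a wait loop with client-go (names are illustrative; minikube's own implementation may differ):

// Sketch: poll a node every 2s until NodeReady is True or the
// timeout expires, tolerating transient apiserver errors.
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver may still be starting; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}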
	W1208 01:47:33.093211 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093277 1128548 retry.go:31] will retry after 135.377583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
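All three applies fail the same way: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, which is not yet listening on localhost:8443 after the restart, so retry.go re-runs each apply after a short randomized wait (135ms, 356ms, 290ms, ... in the attempts that follow). The pattern, sketched with hypothetical signatures:

// Sketch: retry a failing operation with jittered waits, roughly
// matching the logged "will retry after Xms" behavior.
package retry

import (
	"math/rand"
	"time"
)

func withBackoff(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		time.Sleep(base + time.Duration(rand.Int63n(int64(base)))) // jittered wait
	}
	return err
}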
	W1208 01:47:33.093354 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093401 1128548 retry.go:31] will retry after 356.085059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.093693 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093739 1128548 retry.go:31] will retry after 290.352829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.229413 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.295712 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.295753 1128548 retry.go:31] will retry after 504.528201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.385144 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:33.450468 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:33.455498 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.455579 1128548 retry.go:31] will retry after 210.308534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.513454 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.513515 1128548 retry.go:31] will retry after 261.594769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.666341 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:33.730275 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.730367 1128548 retry.go:31] will retry after 515.285755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.775591 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:33.801214 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.874343 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.874382 1128548 retry.go:31] will retry after 373.513153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.879699 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.879734 1128548 retry.go:31] will retry after 640.492075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.246844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:34.248194 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:34.323597 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.323634 1128548 retry.go:31] will retry after 1.019529809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:34.339043 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.339089 1128548 retry.go:31] will retry after 1.209309516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.520466 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:34.578496 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.578579 1128548 retry.go:31] will retry after 641.799617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:35.094210 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:35.220879 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:35.291769 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.291804 1128548 retry.go:31] will retry after 1.824974972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.343984 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:35.420724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.420761 1128548 retry.go:31] will retry after 1.505282353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.548906 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:35.619439 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.619474 1128548 retry.go:31] will retry after 1.475994436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:36.927068 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:36.990909 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:36.990950 1128548 retry.go:31] will retry after 1.384042047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.095678 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:37.117700 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:37.162835 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.162944 1128548 retry.go:31] will retry after 2.706380277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:37.196118 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.196155 1128548 retry.go:31] will retry after 2.546989667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:37.593980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:38.375563 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:38.448945 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:38.448983 1128548 retry.go:31] will retry after 4.228344134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.743778 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:39.805386 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.805423 1128548 retry.go:31] will retry after 1.941295739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.869521 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:39.940830 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.940861 1128548 retry.go:31] will retry after 1.677329859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:40.093478 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:41.618647 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:41.680452 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.680482 1128548 retry.go:31] will retry after 3.415857651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.747716 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:41.810998 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.811032 1128548 retry.go:31] will retry after 4.001958095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:42.593808 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:42.678234 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:42.741215 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:42.741256 1128548 retry.go:31] will retry after 4.935696048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:45.093503 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:45.096924 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:45.182970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.183006 1128548 retry.go:31] will retry after 5.169461339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.814151 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:45.895671 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.895706 1128548 retry.go:31] will retry after 6.069460108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:47.593429 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:47.677825 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:47.741017 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:47.741053 1128548 retry.go:31] will retry after 6.358930969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:50.093389 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:50.240219 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005281202s
	I1208 01:47:50.240251 1121810 kubeadm.go:319] 
	I1208 01:47:50.240305 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:47:50.240337 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:47:50.240436 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:47:50.240441 1121810 kubeadm.go:319] 
	I1208 01:47:50.240540 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:47:50.240570 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:47:50.240599 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:47:50.240604 1121810 kubeadm.go:319] 
	I1208 01:47:50.244623 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:47:50.245144 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:47:50.245269 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:47:50.245523 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:47:50.245536 1121810 kubeadm.go:319] 
	I1208 01:47:50.245606 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1208 01:47:50.245750 1121810 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005281202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
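	[editor's note] The failure above is kubeadm's [kubelet-check] phase, which (per its own message) polls the equivalent of 'curl -sSL http://127.0.0.1:10248/healthz' until the kubelet answers or the 4m0s deadline passes. A minimal Go sketch of such a probe; timeouts and intervals here are illustrative, not kubeadm's actual implementation:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probeKubelet polls the kubelet healthz endpoint until it returns
	// 200 OK or the deadline elapses, mirroring the [kubelet-check] wait.
	func probeKubelet(deadline time.Duration) error {
		client := &http.Client{Timeout: 2 * time.Second}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is healthy
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("kubelet not healthy after %s", deadline)
	}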
	
	I1208 01:47:50.245846 1121810 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:47:50.662033 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:47:50.675434 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:47:50.675505 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:47:50.683543 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:47:50.683562 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:47:50.683614 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:47:50.691591 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:47:50.691654 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:47:50.699350 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:47:50.707376 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:47:50.707443 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:47:50.715135 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.723172 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:47:50.723260 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.731275 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:47:50.739267 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:47:50.739332 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
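	[editor's note] The grep/rm sequence above checks each kubeconfig for the expected control-plane URL and removes any file that is missing it (or missing entirely) so kubeadm regenerates clean configs on the next init. A rough Go equivalent of that cleanup; the file list and endpoint are taken from the log, the helper itself is illustrative:

	package main

	import (
		"os"
		"strings"
	)

	// cleanStaleConfigs deletes kubeconfigs that don't reference the
	// expected control-plane endpoint, mirroring the grep + rm -f above.
	func cleanStaleConfigs() error {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or stale: remove so kubeadm writes a fresh one.
				if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
					return err
				}
			}
		}
		return nil
	}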
	I1208 01:47:50.747081 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:47:50.787448 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:47:50.787678 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:47:50.866311 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:47:50.866389 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:47:50.866433 1121810 kubeadm.go:319] OS: Linux
	I1208 01:47:50.866508 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:47:50.866562 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:47:50.866613 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:47:50.866667 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:47:50.866720 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:47:50.866772 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:47:50.866821 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:47:50.866871 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:47:50.866921 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:47:50.933039 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:47:50.933171 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:47:50.933266 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:47:50.942872 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:47:50.948105 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:47:50.948200 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:47:50.948265 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:47:50.948342 1121810 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:47:50.948403 1121810 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:47:50.948473 1121810 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:47:50.948531 1121810 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:47:50.948594 1121810 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:47:50.948661 1121810 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:47:50.948735 1121810 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:47:50.948807 1121810 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:47:50.948845 1121810 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:47:50.948901 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:47:51.112853 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:47:51.634368 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:47:51.809543 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:47:52.224203 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:47:52.422413 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:47:52.423095 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:47:52.425801 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:47:52.429055 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:47:52.429171 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:47:52.429263 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:47:52.429335 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:47:52.449800 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:47:52.449912 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:47:52.458225 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:47:52.458717 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:47:52.458770 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:47:52.594110 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:47:52.594228 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:47:50.353033 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:50.436848 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:50.436881 1128548 retry.go:31] will retry after 13.13295311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:51.966065 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:52.046307 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:52.046338 1128548 retry.go:31] will retry after 13.071324249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:52.093819 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:54.094117 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:54.100443 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:54.199978 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:54.200013 1128548 retry.go:31] will retry after 5.921409717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:56.593485 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:58.594917 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:00.169117 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:00.465737 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:00.465774 1128548 retry.go:31] will retry after 20.435782348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:01.093648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:03.570310 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:03.593451 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:03.632880 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:03.632912 1128548 retry.go:31] will retry after 21.217435615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:05.117897 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:05.178316 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:05.178352 1128548 retry.go:31] will retry after 19.478477459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:05.594138 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:08.093454 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:10.593499 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:13.093411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:15.093543 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:17.593421 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:20.093502 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:20.902007 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:20.961724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:20.961758 1128548 retry.go:31] will retry after 19.271074882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:22.593561 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:24.657892 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:24.716774 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.716812 1128548 retry.go:31] will retry after 21.882989692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.851274 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:24.908152 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.908186 1128548 retry.go:31] will retry after 13.56417867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:25.093911 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:27.593411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:29.594271 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:32.093606 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:34.094178 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:36.593455 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
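These node_ready.go:55 warnings are minikube polling GET /api/v1/nodes/no-preload-536520 for the node's "Ready" condition. Roughly the same check expressed as a kubectl call (hedged sketch, reusing the node-local kubeconfig and binary from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get node no-preload-536520 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the kubelet reports Ready; here it fails the same
    # way, since both endpoints dial the dead apiserver on port 8443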
	I1208 01:48:38.472558 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:38.531541 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:38.531575 1128548 retry.go:31] will retry after 35.735118355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:38.593962 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:40.233963 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:40.295686 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:40.295723 1128548 retry.go:31] will retry after 24.954393837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:41.093636 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:43.094034 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:45.593528 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:46.601180 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:46.663105 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:46.663141 1128548 retry.go:31] will retry after 26.276311259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:47.594156 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:50.093521 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:52.593610 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:55.093548 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:57.094350 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:59.593344 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:01.593410 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:03.594090 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:05.250844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:49:05.313013 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:05.313123 1128548 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:49:06.093980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:08.094120 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:10.094394 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:12.593567 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:12.940260 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:49:12.998388 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:12.998515 1128548 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.267569 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:49:14.332970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:14.333076 1128548 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.337901 1128548 out.go:179] * Enabled addons: 
	I1208 01:49:14.340893 1128548 addons.go:530] duration metric: took 1m42.136312022s for enable addons: enabled=[]
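"enabled=[]" confirms that after 1m42s of retries no addon callback succeeded; minikube downgrades each failure to a warning and continues startup. A hedged way to confirm addon state for this profile from the host:

    # lists each addon and whether it is enabled for this profile
    minikube -p no-preload-536520 addons list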
	W1208 01:49:14.593648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeated every ~2.5s from 01:49:17 through 01:51:47 ...]
	W1208 01:51:49.594227 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:51:52.594163 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000318083s
	I1208 01:51:52.594189 1121810 kubeadm.go:319] 
	I1208 01:51:52.594247 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:51:52.594280 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:51:52.594385 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:51:52.594389 1121810 kubeadm.go:319] 
	I1208 01:51:52.594514 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:51:52.594548 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:51:52.594578 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:51:52.594582 1121810 kubeadm.go:319] 
	I1208 01:51:52.598647 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:51:52.599081 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:51:52.599190 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:51:52.599423 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:51:52.599429 1121810 kubeadm.go:319] 
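kubeadm gives up after its 4-minute kubelet-check: the health probe it describes ('curl -sSL http://127.0.0.1:10248/healthz') never answered. Its suggested triage, runnable on the node (assembled from the kubeadm output above):

    systemctl status kubelet                    # is the unit running at all?
    journalctl -xeu kubelet -n 100              # recent kubelet errors
    curl -sSL http://127.0.0.1:10248/healthz    # kubelet's own health endpoint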
	I1208 01:51:52.599545 1121810 kubeadm.go:403] duration metric: took 8m7.029694705s to StartCluster
	I1208 01:51:52.599580 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:51:52.599643 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:51:52.599710 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:51:52.631244 1121810 cri.go:89] found id: ""
	I1208 01:51:52.631271 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.631280 1121810 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:51:52.631288 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:51:52.631353 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:51:52.658412 1121810 cri.go:89] found id: ""
	I1208 01:51:52.658492 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.658502 1121810 logs.go:284] No container was found matching "etcd"
	I1208 01:51:52.658519 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:51:52.658610 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:51:52.685789 1121810 cri.go:89] found id: ""
	I1208 01:51:52.685814 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.685823 1121810 logs.go:284] No container was found matching "coredns"
	I1208 01:51:52.685829 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:51:52.685887 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:51:52.713200 1121810 cri.go:89] found id: ""
	I1208 01:51:52.713226 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.713235 1121810 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:51:52.713241 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:51:52.713299 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:51:52.737730 1121810 cri.go:89] found id: ""
	I1208 01:51:52.737756 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.737765 1121810 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:51:52.737771 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:51:52.737829 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:51:52.763894 1121810 cri.go:89] found id: ""
	I1208 01:51:52.763928 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.763937 1121810 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:51:52.763944 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:51:52.764012 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:51:52.788698 1121810 cri.go:89] found id: ""
	I1208 01:51:52.788762 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.788777 1121810 logs.go:284] No container was found matching "kindnet"
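With the apiserver unreachable, minikube falls back to probing containerd directly; every crictl query above returned an empty ID list, meaning containerd never started any control-plane container at all, not even one that exited. The same probe by hand (command as logged):

    # empty output = no kube-apiserver container in any state
    sudo crictl ps -a --quiet --name=kube-apiserver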
	I1208 01:51:52.788788 1121810 logs.go:123] Gathering logs for kubelet ...
	I1208 01:51:52.788799 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:51:52.846920 1121810 logs.go:123] Gathering logs for dmesg ...
	I1208 01:51:52.846956 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:51:52.862048 1121810 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:51:52.862076 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:51:52.931760 1121810 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:51:52.923107    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.923875    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.925579    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.926258    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.927897    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:51:52.923107    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.923875    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.925579    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.926258    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.927897    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:51:52.931799 1121810 logs.go:123] Gathering logs for containerd ...
	I1208 01:51:52.931812 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:51:52.970819 1121810 logs.go:123] Gathering logs for container status ...
	I1208 01:51:52.970855 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1208 01:51:53.000445 1121810 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:51:53.000503 1121810 out.go:285] * 
	W1208 01:51:53.000560 1121810 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.000578 1121810 out.go:285] * 
	W1208 01:51:53.002833 1121810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:51:53.009552 1121810 out.go:203] 
	W1208 01:51:53.012504 1121810 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.012580 1121810 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:51:53.012606 1121810 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:51:53.015855 1121810 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432724135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432797876Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432915423Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432985135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433045279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433112291Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433175315Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433234597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433301470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433386049Z" level=info msg="Connect containerd service"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.434110863Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.435147886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.446860134Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447070458Z" level=info msg="Start recovering state"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447071435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447339736Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482231401Z" level=info msg="Start event monitor"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482425824Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482616693Z" level=info msg="Start streaming server"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482689407Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482749281Z" level=info msg="runtime interface starting up..."
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482802434Z" level=info msg="starting plugins..."
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482868026Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:43:43 newest-cni-457779 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.484885964Z" level=info msg="containerd successfully booted in 0.079491s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:51:54.195232    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:54.195932    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:54.197643    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:54.198279    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:54.199999    4988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:51:54 up  6:34,  0 user,  load average: 0.44, 0.72, 1.50
	Linux newest-cni-457779 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:51:50 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:51:51 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 08 01:51:51 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:51 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:51 newest-cni-457779 kubelet[4801]: E1208 01:51:51.646861    4801 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:51:51 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:51:51 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:51:52 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 08 01:51:52 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:52 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:52 newest-cni-457779 kubelet[4807]: E1208 01:51:52.401568    4807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:51:52 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:51:52 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:51:53 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 01:51:53 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:53 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:53 newest-cni-457779 kubelet[4889]: E1208 01:51:53.259075    4889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:51:53 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:51:53 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:51:54 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 01:51:54 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:54 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:51:54 newest-cni-457779 kubelet[4992]: E1208 01:51:54.171996    4992 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:51:54 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:51:54 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
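
The kubelet crash loop above (restart counter 318–321, "kubelet is configured to not run on a host using cgroup v1") means kubelet v1.35.0-beta.0 is rejecting the cgroup v1 hierarchy on this Ubuntu 20.04 / 5.15 host. A quick way to confirm which cgroup version the host and the node container actually expose — a sketch only; the profile name is taken from this run:

    # Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a cgroup v1 hierarchy.
    stat -fc %T /sys/fs/cgroup
    # Same check inside the kicbase node container for this profile:
    minikube -p newest-cni-457779 ssh -- stat -fc %T /sys/fs/cgroup
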
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 6 (330.839594ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:51:54.700276 1134464 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-457779" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (499.95s)
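
Per the SystemVerification warning in the kubeadm output, kubelet v1.35 and newer only tolerates cgroup v1 when the KubeletConfiguration field failCgroupV1 is explicitly set to false (and the validation is explicitly skipped). A sketch of a kubeadm kubelet-configuration patch along those lines; the directory path is illustrative, and minikube's own patch step (the `[patches] Applied patch ... to target "kubeletconfiguration"` line above) is where such an override would normally land:

    # Hypothetical patch directory; kubeadm reads patches named
    # <target>+<patchtype>.yaml from the directory given to --patches.
    mkdir -p /tmp/kubeadm-patches
    cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
    failCgroupV1: false   # field name per the warning text above
    EOF
    # kubeadm init --patches /tmp/kubeadm-patches ... would then apply it.
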

x
+
TestStartStop/group/no-preload/serial/DeployApp (3.14s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-536520 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-536520 create -f testdata/busybox.yaml: exit status 1 (58.297743ms)

** stderr ** 
	error: context "no-preload-536520" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-536520 create -f testdata/busybox.yaml failed: exit status 1
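
Every kubectl step in this group fails the same way: the no-preload-536520 context is missing from the kubeconfig, because the earlier FirstStart never got far enough to record an endpoint (see the status.go:458 error further down). A minimal way to confirm and, once the profile is actually up, repair the context, using the command the report's own warning suggests:

    # Show which contexts the kubeconfig actually contains:
    kubectl config get-contexts
    # Rewrite the kubeconfig entry from the live profile state:
    minikube -p no-preload-536520 update-context
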
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1097222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:37:08.305644912Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d978b7ec933dfaa3a40373d30ab4c31d838283a17009d633c2f2575fe0d2fa01",
	            "SandboxKey": "/var/run/docker/netns/d978b7ec933d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:a5:95:c9:47:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "169f4c96797fb11af0a5eb9b81855033f528e1a5dc4666f7c0ac0ae34794695b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
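
The inspect output confirms the node container itself is fine: it is running, and every control-plane port is published on loopback (8443/tcp mapped to 33836 here), so the failure sits inside the node rather than at the Docker layer. A small sketch for pulling the published apiserver port straight from this data; the profile name is from this run:

    # Published host port for the apiserver (8443/tcp inside the node):
    docker port no-preload-536520 8443/tcp
    # Equivalent via an inspect template:
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-536520
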
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 6 (324.495995ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:38.640842 1125715 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-895688 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:43:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:43:34.815729 1121810 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:34.815855 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.815867 1121810 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:34.815872 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.816138 1121810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:43:34.816580 1121810 out.go:368] Setting JSON to false
	I1208 01:43:34.817458 1121810 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23168,"bootTime":1765135047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:43:34.817532 1121810 start.go:143] virtualization:  
	I1208 01:43:34.821576 1121810 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:34.825841 1121810 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:34.825977 1121810 notify.go:221] Checking for updates...
	I1208 01:43:34.832259 1121810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:34.835279 1121810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:43:34.841305 1121810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:43:34.844728 1121810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:34.847821 1121810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:34.851422 1121810 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:34.851605 1121810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:34.879485 1121810 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:34.879731 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:34.965413 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:34.956183464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
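The info.go dump above is plain `docker system info` rendered through a Go template. A minimal sketch for reproducing the same probe by hand (the field name is taken from the JSON shown in the log):

	# Full daemon state as one JSON blob, exactly as cli_runner.go invokes it
	docker system info --format "{{json .}}"
	# Or extract a single field the bootstrapper cares about, e.g. the cgroup driver
	docker system info --format "{{.CgroupDriver}}"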
	I1208 01:43:34.965520 1121810 docker.go:319] overlay module found
	I1208 01:43:34.968791 1121810 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:34.971755 1121810 start.go:309] selected driver: docker
	I1208 01:43:34.971774 1121810 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:34.971788 1121810 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:34.972547 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:35.028561 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:35.017550524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:35.028722 1121810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:43:35.028754 1121810 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:43:35.029019 1121810 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:43:35.032208 1121810 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:35.035049 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:35.035123 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:35.035136 1121810 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:35.035233 1121810 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:35.038520 1121810 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:43:35.041429 1121810 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:43:35.044577 1121810 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:35.047363 1121810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:35.047458 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.047540 1121810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:43:35.047549 1121810 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:35.047628 1121810 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:35.047639 1121810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:43:35.047753 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:35.047771 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json: {Name:mk01a58f99ac25ab3f8420cd37e5943e99ab0d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
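profile.go persists the full cluster config dumped above as per-profile JSON. A sketch for inspecting it outside the test run (the path is copied from the log line above; using `jq` is an assumption, any JSON viewer works):

	# Read back the saved profile config
	jq '.KubernetesConfig.KubernetesVersion' \
	  /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json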
	I1208 01:43:35.067817 1121810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:35.067841 1121810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:35.067860 1121810 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:35.067891 1121810 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:35.068006 1121810 start.go:364] duration metric: took 93.999µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:43:35.068037 1121810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:43:35.068112 1121810 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:35.071522 1121810 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:35.071830 1121810 start.go:159] libmachine.API.Create for "newest-cni-457779" (driver="docker")
	I1208 01:43:35.071875 1121810 client.go:173] LocalClient.Create starting
	I1208 01:43:35.072009 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:43:35.072049 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072066 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072149 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:43:35.072168 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072180 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072559 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:35.089783 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:35.089866 1121810 network_create.go:284] running [docker network inspect newest-cni-457779] to gather additional debugging logs...
	I1208 01:43:35.089892 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779
	W1208 01:43:35.107018 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 returned with exit code 1
	I1208 01:43:35.107053 1121810 network_create.go:287] error running [docker network inspect newest-cni-457779]: docker network inspect newest-cni-457779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-457779 not found
	I1208 01:43:35.107080 1121810 network_create.go:289] output of [docker network inspect newest-cni-457779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-457779 not found
	
	** /stderr **
	I1208 01:43:35.107206 1121810 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:35.124993 1121810 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:43:35.125469 1121810 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:43:35.125932 1121810 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:43:35.126507 1121810 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05be0}
	I1208 01:43:35.126532 1121810 network_create.go:124] attempt to create docker network newest-cni-457779 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:43:35.126598 1121810 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-457779 newest-cni-457779
	I1208 01:43:35.185488 1121810 network_create.go:108] docker network newest-cni-457779 192.168.76.0/24 created
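network_create.go walked 192.168.49.0/24, .58.0/24 and .67.0/24, found them taken, and created the bridge on the first free subnet. A sketch for confirming the result with the same inspect call that failed before creation:

	# The subnet/gateway should match the "network ... created" line above
	docker network inspect newest-cni-457779 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 via 192.168.76.1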
	I1208 01:43:35.185521 1121810 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-457779" container
	I1208 01:43:35.185613 1121810 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:35.202705 1121810 cli_runner.go:164] Run: docker volume create newest-cni-457779 --label name.minikube.sigs.k8s.io=newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:35.222719 1121810 oci.go:103] Successfully created a docker volume newest-cni-457779
	I1208 01:43:35.222808 1121810 cli_runner.go:164] Run: docker run --rm --name newest-cni-457779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --entrypoint /usr/bin/test -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:35.763230 1121810 oci.go:107] Successfully prepared a docker volume newest-cni-457779
	I1208 01:43:35.763299 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.763314 1121810 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:35.763383 1121810 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:39.703637 1121810 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.940199388s)
	I1208 01:43:39.703674 1121810 kic.go:203] duration metric: took 3.940356568s to extract preloaded images to volume ...
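kic.go unpacks the lz4 preload into the named volume through a throwaway tar container, as shown in the completed command above. A sketch for spot-checking that the images landed (busybox as the helper image is an assumption; any image with `ls` works):

	# List the containerd content store extracted into the volume
	docker run --rm -v newest-cni-457779:/var busybox ls /var/lib/containerd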
	W1208 01:43:39.703822 1121810 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:39.703938 1121810 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:39.755193 1121810 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-457779 --name newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-457779 --network newest-cni-457779 --ip 192.168.76.2 --volume newest-cni-457779:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:40.098178 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Running}}
	I1208 01:43:40.139995 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.170386 1121810 cli_runner.go:164] Run: docker exec newest-cni-457779 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:40.229771 1121810 oci.go:144] the created container "newest-cni-457779" has a running status.
	I1208 01:43:40.229802 1121810 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa...
	I1208 01:43:40.697110 1121810 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:40.726929 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.749136 1121810 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:40.749165 1121810 kic_runner.go:114] Args: [docker exec --privileged newest-cni-457779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:43:40.813461 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.840725 1121810 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:40.840842 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:40.867802 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:40.868157 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:40.868172 1121810 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:41.095667 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.095764 1121810 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:43:41.095876 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.120122 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.120469 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.120480 1121810 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:43:41.290623 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.290789 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.311253 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.311570 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.311587 1121810 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:43:41.483218 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:41.483251 1121810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:43:41.483283 1121810 ubuntu.go:190] setting up certificates
	I1208 01:43:41.483304 1121810 provision.go:84] configureAuth start
	I1208 01:43:41.483379 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:41.501594 1121810 provision.go:143] copyHostCerts
	I1208 01:43:41.501670 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:43:41.501684 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:43:41.501765 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:43:41.501870 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:43:41.501882 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:43:41.501911 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:43:41.501965 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:43:41.501974 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:43:41.501997 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:43:41.502054 1121810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:43:41.701737 1121810 provision.go:177] copyRemoteCerts
	I1208 01:43:41.701810 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:41.701853 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.719228 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
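The guest's sshd is published on an ephemeral localhost port (33863 in this run), which sshutil reads out of `container inspect`. A sketch of resolving and using the mapping directly, with the key path taken from the log line above:

	# Which host port maps to the node's sshd?
	docker port newest-cni-457779 22/tcp     # -> 127.0.0.1:33863 here
	# Log in the way the test harness does
	ssh -p 33863 -i /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa docker@127.0.0.1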
	I1208 01:43:41.826667 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:41.845605 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:43:41.864446 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
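configureAuth generated a server cert whose SANs are listed at 01:43:41.502054 and copied it into /etc/docker above. A sketch for confirming the SAN set on the host-side copy:

	# The printed names should match the san=[...] list in the log
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'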
	I1208 01:43:41.883477 1121810 provision.go:87] duration metric: took 400.143683ms to configureAuth
	I1208 01:43:41.883508 1121810 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:41.883715 1121810 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:41.883727 1121810 machine.go:97] duration metric: took 1.042983827s to provisionDockerMachine
	I1208 01:43:41.883734 1121810 client.go:176] duration metric: took 6.811847736s to LocalClient.Create
	I1208 01:43:41.883755 1121810 start.go:167] duration metric: took 6.811927679s to libmachine.API.Create "newest-cni-457779"
	I1208 01:43:41.883766 1121810 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:43:41.883777 1121810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:41.883842 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:41.883884 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.901984 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.009332 1121810 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:42.014632 1121810 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:42.014671 1121810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:42.014684 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:43:42.014745 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:43:42.014838 1121810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:43:42.014945 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:42.027153 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:42.048521 1121810 start.go:296] duration metric: took 164.740218ms for postStartSetup
	I1208 01:43:42.048996 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.068188 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:42.068517 1121810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:42.068578 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.089135 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.196671 1121810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:43:42.202509 1121810 start.go:128] duration metric: took 7.134380905s to createHost
	I1208 01:43:42.202544 1121810 start.go:83] releasing machines lock for "newest-cni-457779", held for 7.134523987s
	I1208 01:43:42.202651 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.225469 1121810 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:42.225530 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.225555 1121810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:42.225620 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.248192 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.252198 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.354818 1121810 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:42.448479 1121810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:42.453374 1121810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:42.453474 1121810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:42.481420 1121810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:43:42.481454 1121810 start.go:496] detecting cgroup driver to use...
	I1208 01:43:42.481487 1121810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:42.481545 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:43:42.497315 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:43:42.510801 1121810 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:42.510908 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:42.528913 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:42.549245 1121810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:42.677688 1121810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:42.808025 1121810 docker.go:234] disabling docker service ...
	I1208 01:43:42.808134 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:42.829668 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:42.844784 1121810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:42.967423 1121810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:43.080509 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:43.099271 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:43.116361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:43:43.125920 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:43:43.135415 1121810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:43:43.135546 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:43:43.145049 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.154361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:43:43.163282 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.172992 1121810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:43.183456 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:43:43.192700 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:43:43.201918 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:43:43.211033 1121810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:43.218575 1121810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:43.226217 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.336148 1121810 ssh_runner.go:195] Run: sudo systemctl restart containerd
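The sed pipeline above (sandbox_image, SystemdCgroup=false, the runc v2 shim, conf_dir, unprivileged ports) is how containerd.go pins the cgroupfs driver before this restart. A sketch for spot-checking the rewritten file inside the node:

	# Confirm the settings the sed edits were meant to leave behind
	docker exec newest-cni-457779 \
	  grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml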
	I1208 01:43:43.485946 1121810 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:43:43.486057 1121810 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:43:43.490114 1121810 start.go:564] Will wait 60s for crictl version
	I1208 01:43:43.490229 1121810 ssh_runner.go:195] Run: which crictl
	I1208 01:43:43.494026 1121810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:43.518236 1121810 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:43:43.518359 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.546503 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.572460 1121810 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:43:43.575475 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:43.591594 1121810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:43.595521 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
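The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh gateway mapping, and `sudo cp` the temp file back, since a plain `>` redirect onto /etc/hosts would run without root. A verification sketch:

	docker exec newest-cni-457779 grep host.minikube.internal /etc/hosts
	# expected: 192.168.76.1	host.minikube.internal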
	I1208 01:43:43.608351 1121810 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:43:43.611336 1121810 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:43.611494 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:43.611589 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.637041 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.637067 1121810 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:43:43.637131 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.663968 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.663994 1121810 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:43.664003 1121810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:43:43.664106 1121810 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:43:43.664184 1121810 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:43:43.690508 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:43.690535 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:43.690554 1121810 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:43:43.690578 1121810 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:43.690708 1121810 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:43:43.690785 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:43:43.698933 1121810 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:43.699056 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:43.707017 1121810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:43:43.720830 1121810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:43:43.734819 1121810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
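The rendered kubeadm config above lands at /var/tmp/minikube/kubeadm.yaml.new. A config of this shape can be sanity-checked before it is ever applied (a sketch; the kubeadm binary living in the versioned binaries directory is an assumption based on the ls above, and a clean dry run still depends on node state):

	# Exercise the config without persisting cluster state
	docker exec newest-cni-457779 sudo \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run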
	I1208 01:43:43.748443 1121810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:43.752534 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:43.763093 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.892382 1121810 ssh_runner.go:195] Run: sudo systemctl start kubelet
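kubelet is brought up purely through the systemd units scp'd a few lines earlier. A sketch for watching it inside the node while the cluster bootstraps:

	# Follow kubelet as it comes up inside the node container
	docker exec newest-cni-457779 sudo journalctl -u kubelet --no-pager -f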
	I1208 01:43:43.909690 1121810 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:43:43.909718 1121810 certs.go:195] generating shared ca certs ...
	I1208 01:43:43.909736 1121810 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:43.909947 1121810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:43:43.910028 1121810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:43:43.910042 1121810 certs.go:257] generating profile certs ...
	I1208 01:43:43.910113 1121810 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:43:43.910132 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt with IP's: []
	I1208 01:43:44.271233 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt ...
	I1208 01:43:44.271267 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt: {Name:mka7ec1a9b348db295896c4fbe93c78f0eac2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271468 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key ...
	I1208 01:43:44.271482 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key: {Name:mkc310f4a570315e10c49516c56b2513b55aa651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271582 1121810 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:43:44.271600 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:43:44.830639 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 ...
	I1208 01:43:44.830674 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399: {Name:mk4dcde78303e922dc6fd9b0f86bb4a694f9ca60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830866 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 ...
	I1208 01:43:44.830881 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399: {Name:mk518276ee5546392f5eb2700a48869cb6431589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830967 1121810 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt
	I1208 01:43:44.831050 1121810 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key
	I1208 01:43:44.831119 1121810 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:43:44.831137 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt with IP's: []
	I1208 01:43:44.882804 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt ...
	I1208 01:43:44.882832 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt: {Name:mk7d1a29564431efb40b45d0c303e991b7f53000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883011 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key ...
	I1208 01:43:44.883025 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key: {Name:mk7882d520d12c1dd539975ac85c206b173a5dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883213 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:43:44.883262 1121810 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:44.883275 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:43:44.883314 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:44.883347 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:44.883379 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:43:44.883428 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:44.884017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:44.902791 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:44.921385 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:44.941017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:44.959449 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:43:44.976974 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:44.995099 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:45.050850 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:45.098831 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:43:45.141186 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:45.192133 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:43:45.239267 1121810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:43:45.262695 1121810 ssh_runner.go:195] Run: openssl version
	I1208 01:43:45.278120 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.292034 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:43:45.303687 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308166 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308244 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.359063 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.376429 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.385683 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.402964 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:45.419573 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426601 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426673 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.470891 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:45.478953 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:43:45.487067 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.495033 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:43:45.505252 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509187 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509254 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.550865 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:45.558694 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
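
The three test/ln/hash sequences above all follow OpenSSL's hashed-directory convention: a CA under /etc/ssl/certs is only consulted during verification if it is also reachable under its subject hash with a ".0" suffix. A minimal sketch of one such install step, using a hypothetical certificate name:

	# link the CA into the trust directory under its own name
	sudo ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/example.pem
	# compute the subject hash that OpenSSL resolves trust anchors by
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	# expose the same cert as <hash>.0, the lookup name used at verify time
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${hash}.0"
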
	I1208 01:43:45.566191 1121810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:45.569801 1121810 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:45.569857 1121810 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:45.569950 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1208 01:43:45.570007 1121810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:45.598900 1121810 cri.go:89] found id: ""
	I1208 01:43:45.598976 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:45.606734 1121810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:45.614639 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:45.614740 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:45.622525 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:45.622586 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:45.622667 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:43:45.630915 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:45.631002 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:45.638607 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:43:45.646509 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:45.646578 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:45.654866 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.663136 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:45.663229 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.670925 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:43:45.679164 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:45.679233 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
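
Each grep/rm pair above applies the same rule: a kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is treated as stale and deleted so that kubeadm init can regenerate it. A compressed sketch of that check, with the endpoint value taken from the log:

	endpoint="https://control-plane.minikube.internal:8443"
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		# keep a config only if it already points at the expected endpoint
		sudo grep -q "$endpoint" "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
	done
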
	I1208 01:43:45.686793 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:43:45.726191 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:43:45.726671 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:45.801402 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:45.801539 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:45.801611 1121810 kubeadm.go:319] OS: Linux
	I1208 01:43:45.801690 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:45.801780 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:45.801861 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:45.801944 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:45.802035 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:45.802107 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:45.802179 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:45.802261 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:45.802332 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:45.874152 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:45.874329 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:45.874481 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:45.879465 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:45.885605 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:45.885775 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:45.885880 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:46.228184 1121810 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:46.789407 1121810 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:46.965778 1121810 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:47.194652 1121810 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:47.706685 1121810 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:47.707058 1121810 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:47.801474 1121810 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:47.801936 1121810 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:48.142552 1121810 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:48.263003 1121810 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:48.445660 1121810 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:48.445984 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:48.591329 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:49.028618 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:49.379863 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:49.569393 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:50.065560 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:50.066253 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:50.069265 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:50.072991 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:43:50.073097 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:50.073174 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:50.073707 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:50.092183 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:50.092359 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:50.100854 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:50.101429 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:50.101728 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:50.234928 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:50.235054 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:36.224252 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202843s
	I1208 01:45:36.224297 1096912 kubeadm.go:319] 
	I1208 01:45:36.224376 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:36.224412 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:36.224526 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:36.224533 1096912 kubeadm.go:319] 
	I1208 01:45:36.224650 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:36.224695 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:36.224737 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:36.224744 1096912 kubeadm.go:319] 
	I1208 01:45:36.229514 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:36.229948 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:36.230084 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:36.230325 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:36.230339 1096912 kubeadm.go:319] 
	I1208 01:45:36.230417 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:45:36.230499 1096912 kubeadm.go:403] duration metric: took 8m6.827986586s to StartCluster
	I1208 01:45:36.230540 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:45:36.230607 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:45:36.261525 1096912 cri.go:89] found id: ""
	I1208 01:45:36.261550 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.261560 1096912 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:45:36.261567 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:45:36.261627 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:45:36.290275 1096912 cri.go:89] found id: ""
	I1208 01:45:36.290298 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.290307 1096912 logs.go:284] No container was found matching "etcd"
	I1208 01:45:36.290313 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:45:36.290373 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:45:36.317513 1096912 cri.go:89] found id: ""
	I1208 01:45:36.317543 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.317552 1096912 logs.go:284] No container was found matching "coredns"
	I1208 01:45:36.317559 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:45:36.317626 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:45:36.344794 1096912 cri.go:89] found id: ""
	I1208 01:45:36.344818 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.344827 1096912 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:45:36.344834 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:45:36.344896 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:45:36.374203 1096912 cri.go:89] found id: ""
	I1208 01:45:36.374231 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.374239 1096912 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:45:36.374246 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:45:36.374305 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:45:36.400248 1096912 cri.go:89] found id: ""
	I1208 01:45:36.400280 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.400291 1096912 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:45:36.400299 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:45:36.400360 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:45:36.425166 1096912 cri.go:89] found id: ""
	I1208 01:45:36.425190 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.425203 1096912 logs.go:284] No container was found matching "kindnet"
	I1208 01:45:36.425213 1096912 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:45:36.425226 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:45:36.489026 1096912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:45:36.489050 1096912 logs.go:123] Gathering logs for containerd ...
	I1208 01:45:36.489063 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:45:36.530813 1096912 logs.go:123] Gathering logs for container status ...
	I1208 01:45:36.530852 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:45:36.565171 1096912 logs.go:123] Gathering logs for kubelet ...
	I1208 01:45:36.565198 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:45:36.637319 1096912 logs.go:123] Gathering logs for dmesg ...
	I1208 01:45:36.637363 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:45:36.667302 1096912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:45:36.667423 1096912 out.go:285] * 
	W1208 01:45:36.667679 1096912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.667856 1096912 out.go:285] * 
	W1208 01:45:36.670553 1096912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:45:36.675751 1096912 out.go:203] 
	W1208 01:45:36.678675 1096912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.678951 1096912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:45:36.679016 1096912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:45:36.683776 1096912 out.go:203] 
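
Acting on the suggestion above would look roughly like the following, with the profile name inferred from the surrounding post-mortem; note that the kubelet journal further down shows the actual blocker is the cgroup v1 validation, so the cgroup-driver flag alone may not be sufficient here:

	$ minikube ssh -p no-preload-536520 "sudo journalctl -xeu kubelet"
	$ minikube start -p no-preload-536520 --extra-config=kubelet.cgroup-driver=systemd
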
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:37:18 no-preload-536520 containerd[758]: time="2025-12-08T01:37:18.211674218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.246725505Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.249013236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267094705Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267745122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.258971098Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.261089965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.269499475Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.270600866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.761254121Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.764004657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.776242422Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.786248661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.366835429Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.370354521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.377378157Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.378171320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.504878516Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.507125147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.516478294Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.517412037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.869447727Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.871885080Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.880678132Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.881176860Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:39.326919    5714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:39.327461    5714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:39.329218    5714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:39.330041    5714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:39.331751    5714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:45:39 up  6:28,  0 user,  load average: 0.53, 1.71, 2.08
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:45:35 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:36 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:36 no-preload-536520 kubelet[5471]: E1208 01:45:36.729128    5471 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:36 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:37 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:37 no-preload-536520 kubelet[5497]: E1208 01:45:37.418426    5497 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 kubelet[5600]: E1208 01:45:38.184879    5600 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 kubelet[5629]: E1208 01:45:38.946723    5629 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
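
The kubelet journal above shows the root cause: kubelet v1.35 refuses to run on a cgroup v1 host unless that support is explicitly opted into, exactly as the SystemVerification warning stated (which also requires skipping the preflight validation itself). A sketch for confirming the host's cgroup version, plus what the opt-in would look like in the kubelet's configuration file (illustrative only; the YAML field is failCgroupV1, matching the Go field FailCgroupV1 named in the warning):

	$ stat -fc %T /sys/fs/cgroup     # prints cgroup2fs on a cgroup v2 host, tmpfs on v1
	$ cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
	# explicit opt-in to deprecated cgroup v1 support (kubelet v1.31+)
	failCgroupV1: false
	EOF
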
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 6 (378.501836ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:39.809340 1125933 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
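
The status error above ("no-preload-536520" missing from the kubeconfig) matches the stale-context warning in the stdout block; the fix the warning names, scoped to this profile, would be:

	$ minikube update-context -p no-preload-536520
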
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1097222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:37:08.305644912Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d978b7ec933dfaa3a40373d30ab4c31d838283a17009d633c2f2575fe0d2fa01",
	            "SandboxKey": "/var/run/docker/netns/d978b7ec933d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:a5:95:c9:47:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "169f4c96797fb11af0a5eb9b81855033f528e1a5dc4666f7c0ac0ae34794695b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
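The empty "HostPort" values under HostConfig.PortBindings above are expected: minikube publishes each guest port on 127.0.0.1 with no fixed host port, so Docker assigns ephemeral host ports at container start, and the resolved mappings appear only under NetworkSettings.Ports (33833-33837 here). A minimal way to recover one mapping by hand, assuming the container is still running and you are outside the test harness:

	docker port no-preload-536520 8443
	# prints the resolved binding, e.g. 127.0.0.1:33836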
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 6 (356.915604ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:40.186702 1126021 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
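The exit status 6 here is a kubeconfig problem rather than a container problem: docker inspect reports the host as Running, while status.go:458 finds no "no-preload-536520" entry in the test's kubeconfig. Following the hint the status output itself prints, a manual repair (sketch; profile name assumed unchanged) would be:

	out/minikube-linux-arm64 update-context -p no-preload-536520

update-context rewrites the profile's kubeconfig entry to point at the cluster's current apiserver endpoint.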
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-895688 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:43:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:43:34.815729 1121810 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:34.815855 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.815867 1121810 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:34.815872 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.816138 1121810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:43:34.816580 1121810 out.go:368] Setting JSON to false
	I1208 01:43:34.817458 1121810 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23168,"bootTime":1765135047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:43:34.817532 1121810 start.go:143] virtualization:  
	I1208 01:43:34.821576 1121810 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:34.825841 1121810 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:34.825977 1121810 notify.go:221] Checking for updates...
	I1208 01:43:34.832259 1121810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:34.835279 1121810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:43:34.841305 1121810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:43:34.844728 1121810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:34.847821 1121810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:34.851422 1121810 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:34.851605 1121810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:34.879485 1121810 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:34.879731 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:34.965413 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:34.956183464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:34.965520 1121810 docker.go:319] overlay module found
	I1208 01:43:34.968791 1121810 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:34.971755 1121810 start.go:309] selected driver: docker
	I1208 01:43:34.971774 1121810 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:34.971788 1121810 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:34.972547 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:35.028561 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:35.017550524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:35.028722 1121810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:43:35.028754 1121810 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:43:35.029019 1121810 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:43:35.032208 1121810 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:35.035049 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:35.035123 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:35.035136 1121810 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:35.035233 1121810 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:35.038520 1121810 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:43:35.041429 1121810 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:43:35.044577 1121810 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:35.047363 1121810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:35.047458 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.047540 1121810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:43:35.047549 1121810 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:35.047628 1121810 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:35.047639 1121810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:43:35.047753 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:35.047771 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json: {Name:mk01a58f99ac25ab3f8420cd37e5943e99ab0d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:35.067817 1121810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:35.067841 1121810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:35.067860 1121810 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:35.067891 1121810 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:35.068006 1121810 start.go:364] duration metric: took 93.999µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:43:35.068037 1121810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:43:35.068112 1121810 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:35.071522 1121810 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:35.071830 1121810 start.go:159] libmachine.API.Create for "newest-cni-457779" (driver="docker")
	I1208 01:43:35.071875 1121810 client.go:173] LocalClient.Create starting
	I1208 01:43:35.072009 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:43:35.072049 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072066 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072149 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:43:35.072168 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072180 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072559 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:35.089783 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:35.089866 1121810 network_create.go:284] running [docker network inspect newest-cni-457779] to gather additional debugging logs...
	I1208 01:43:35.089892 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779
	W1208 01:43:35.107018 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 returned with exit code 1
	I1208 01:43:35.107053 1121810 network_create.go:287] error running [docker network inspect newest-cni-457779]: docker network inspect newest-cni-457779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-457779 not found
	I1208 01:43:35.107080 1121810 network_create.go:289] output of [docker network inspect newest-cni-457779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-457779 not found
	
	** /stderr **
	I1208 01:43:35.107206 1121810 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:35.124993 1121810 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:43:35.125469 1121810 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:43:35.125932 1121810 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:43:35.126507 1121810 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05be0}
	I1208 01:43:35.126532 1121810 network_create.go:124] attempt to create docker network newest-cni-457779 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:43:35.126598 1121810 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-457779 newest-cni-457779
	I1208 01:43:35.185488 1121810 network_create.go:108] docker network newest-cni-457779 192.168.76.0/24 created
	I1208 01:43:35.185521 1121810 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-457779" container
	I1208 01:43:35.185613 1121810 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:35.202705 1121810 cli_runner.go:164] Run: docker volume create newest-cni-457779 --label name.minikube.sigs.k8s.io=newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:35.222719 1121810 oci.go:103] Successfully created a docker volume newest-cni-457779
	I1208 01:43:35.222808 1121810 cli_runner.go:164] Run: docker run --rm --name newest-cni-457779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --entrypoint /usr/bin/test -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:35.763230 1121810 oci.go:107] Successfully prepared a docker volume newest-cni-457779
	I1208 01:43:35.763299 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.763314 1121810 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:35.763383 1121810 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:39.703637 1121810 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.940199388s)
	I1208 01:43:39.703674 1121810 kic.go:203] duration metric: took 3.940356568s to extract preloaded images to volume ...
	W1208 01:43:39.703822 1121810 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:39.703938 1121810 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:39.755193 1121810 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-457779 --name newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-457779 --network newest-cni-457779 --ip 192.168.76.2 --volume newest-cni-457779:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:40.098178 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Running}}
	I1208 01:43:40.139995 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.170386 1121810 cli_runner.go:164] Run: docker exec newest-cni-457779 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:40.229771 1121810 oci.go:144] the created container "newest-cni-457779" has a running status.
	I1208 01:43:40.229802 1121810 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa...
	I1208 01:43:40.697110 1121810 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:40.726929 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.749136 1121810 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:40.749165 1121810 kic_runner.go:114] Args: [docker exec --privileged newest-cni-457779 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 01:43:40.813461 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.840725 1121810 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:40.840842 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:40.867802 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:40.868157 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:40.868172 1121810 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:41.095667 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.095764 1121810 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:43:41.095876 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.120122 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.120469 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.120480 1121810 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:43:41.290623 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.290789 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.311253 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.311570 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.311587 1121810 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:43:41.483218 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:41.483251 1121810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:43:41.483283 1121810 ubuntu.go:190] setting up certificates
	I1208 01:43:41.483304 1121810 provision.go:84] configureAuth start
	I1208 01:43:41.483379 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:41.501594 1121810 provision.go:143] copyHostCerts
	I1208 01:43:41.501670 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:43:41.501684 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:43:41.501765 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:43:41.501870 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:43:41.501882 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:43:41.501911 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:43:41.501965 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:43:41.501974 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:43:41.501997 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:43:41.502054 1121810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:43:41.701737 1121810 provision.go:177] copyRemoteCerts
	I1208 01:43:41.701810 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:41.701853 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.719228 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:41.826667 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:41.845605 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:43:41.864446 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:43:41.883477 1121810 provision.go:87] duration metric: took 400.143683ms to configureAuth
	I1208 01:43:41.883508 1121810 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:41.883715 1121810 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:41.883727 1121810 machine.go:97] duration metric: took 1.042983827s to provisionDockerMachine
	I1208 01:43:41.883734 1121810 client.go:176] duration metric: took 6.811847736s to LocalClient.Create
	I1208 01:43:41.883755 1121810 start.go:167] duration metric: took 6.811927679s to libmachine.API.Create "newest-cni-457779"
	I1208 01:43:41.883766 1121810 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:43:41.883777 1121810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:41.883842 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:41.883884 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.901984 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.009332 1121810 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:42.014632 1121810 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:42.014671 1121810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:42.014684 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:43:42.014745 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:43:42.014838 1121810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:43:42.014945 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:42.027153 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:42.048521 1121810 start.go:296] duration metric: took 164.740218ms for postStartSetup
	I1208 01:43:42.048996 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.068188 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:42.068517 1121810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:42.068578 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.089135 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.196671 1121810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:43:42.202509 1121810 start.go:128] duration metric: took 7.134380905s to createHost
	I1208 01:43:42.202544 1121810 start.go:83] releasing machines lock for "newest-cni-457779", held for 7.134523987s
	I1208 01:43:42.202651 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.225469 1121810 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:42.225530 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.225555 1121810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:42.225620 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.248192 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.252198 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.354818 1121810 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:42.448479 1121810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:42.453374 1121810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:42.453474 1121810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:42.481420 1121810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:43:42.481454 1121810 start.go:496] detecting cgroup driver to use...
	I1208 01:43:42.481487 1121810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:42.481545 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:43:42.497315 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:43:42.510801 1121810 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:42.510908 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:42.528913 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:42.549245 1121810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:42.677688 1121810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:42.808025 1121810 docker.go:234] disabling docker service ...
	I1208 01:43:42.808134 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:42.829668 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:42.844784 1121810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:42.967423 1121810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:43.080509 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:43.099271 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:43.116361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:43:43.125920 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:43:43.135415 1121810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:43:43.135546 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:43:43.145049 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.154361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:43:43.163282 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.172992 1121810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:43.183456 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:43:43.192700 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:43:43.201918 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:43:43.211033 1121810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:43.218575 1121810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:43.226217 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.336148 1121810 ssh_runner.go:195] Run: sudo systemctl restart containerd
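Note on the step above: the run of sed edits is how minikube switches containerd to the "cgroupfs" driver it detected on the host. A minimal standalone sketch of the same rewrite (same file path as in the log; the daemon-reload and restart are required for it to take effect):

    # Force containerd onto the cgroupfs driver, as the log does above.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    # The socket should reappear promptly; minikube waits up to 60s for it.
    test -S /run/containerd/containerd.sock && echo containerd ready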
	I1208 01:43:43.485946 1121810 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:43:43.486057 1121810 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:43:43.490114 1121810 start.go:564] Will wait 60s for crictl version
	I1208 01:43:43.490229 1121810 ssh_runner.go:195] Run: which crictl
	I1208 01:43:43.494026 1121810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:43.518236 1121810 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:43:43.518359 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.546503 1121810 ssh_runner.go:195] Run: containerd --version
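The crictl calls above succeed without any flags because of the /etc/crictl.yaml written earlier, which pins runtime-endpoint to the containerd socket. Absent that file, the equivalent manual probe has to name the endpoint explicitly; a sketch:

    # Same version probe, with the endpoint passed on the command line
    # instead of being read from /etc/crictl.yaml:
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version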
	I1208 01:43:43.572460 1121810 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:43:43.575475 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:43.591594 1121810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:43.595521 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
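The /etc/hosts rewrite above deliberately avoids sed -i, most likely because /etc/hosts is bind-mounted into the container and cannot be replaced by rename; instead it filters out any stale entry, appends the fresh mapping, and copies the temp file back over the same inode. Unpacked, with the same address and hostname:

    # Rewrite /etc/hosts in place on a bind mount (as the one-liner above does):
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # cp writes through; mv/rename would fail here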
	I1208 01:43:43.608351 1121810 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:43:43.611336 1121810 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:43.611494 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:43.611589 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.637041 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.637067 1121810 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:43:43.637131 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.663968 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.663994 1121810 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:43.664003 1121810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:43:43.664106 1121810 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
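The empty ExecStart= followed by a full ExecStart= in the unit text above is the standard systemd drop-in idiom: the blank assignment clears the command inherited from the base kubelet.service before the override installs minikube's own invocation. The drop-in minikube ships (scp'd as 10-kubeadm.conf later in the log) is essentially:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2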
	I1208 01:43:43.664184 1121810 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:43:43.690508 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:43.690535 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:43.690554 1121810 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:43:43.690578 1121810 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stat
icPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:43.690708 1121810 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:43:43.690785 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:43:43.698933 1121810 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:43.699056 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:43.707017 1121810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:43:43.720830 1121810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:43:43.734819 1121810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1208 01:43:43.748443 1121810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:43.752534 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:43.763093 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.892382 1121810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:43:43.909690 1121810 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:43:43.909718 1121810 certs.go:195] generating shared ca certs ...
	I1208 01:43:43.909736 1121810 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:43.909947 1121810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:43:43.910028 1121810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:43:43.910042 1121810 certs.go:257] generating profile certs ...
	I1208 01:43:43.910113 1121810 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:43:43.910132 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt with IP's: []
	I1208 01:43:44.271233 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt ...
	I1208 01:43:44.271267 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt: {Name:mka7ec1a9b348db295896c4fbe93c78f0eac2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271468 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key ...
	I1208 01:43:44.271482 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key: {Name:mkc310f4a570315e10c49516c56b2513b55aa651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271582 1121810 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:43:44.271600 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:43:44.830639 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 ...
	I1208 01:43:44.830674 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399: {Name:mk4dcde78303e922dc6fd9b0f86bb4a694f9ca60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830866 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 ...
	I1208 01:43:44.830881 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399: {Name:mk518276ee5546392f5eb2700a48869cb6431589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830967 1121810 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt
	I1208 01:43:44.831050 1121810 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key
	I1208 01:43:44.831119 1121810 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:43:44.831137 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt with IP's: []
	I1208 01:43:44.882804 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt ...
	I1208 01:43:44.882832 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt: {Name:mk7d1a29564431efb40b45d0c303e991b7f53000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883011 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key ...
	I1208 01:43:44.883025 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key: {Name:mk7882d520d12c1dd539975ac85c206b173a5dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883213 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:43:44.883262 1121810 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:44.883275 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:43:44.883314 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:44.883347 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:44.883379 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:43:44.883428 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:44.884017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:44.902791 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:44.921385 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:44.941017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:44.959449 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:43:44.976974 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:44.995099 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:45.050850 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:45.098831 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:43:45.141186 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:45.192133 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:43:45.239267 1121810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:43:45.262695 1121810 ssh_runner.go:195] Run: openssl version
	I1208 01:43:45.278120 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.292034 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:43:45.303687 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308166 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308244 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.359063 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.376429 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.385683 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.402964 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:45.419573 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426601 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426673 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.470891 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:45.478953 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:43:45.487067 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.495033 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:43:45.505252 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509187 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509254 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.550865 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:45.558694 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
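The pattern repeated three times above (openssl x509 -hash, then ln -fs to an eight-hex-digit name ending in .0) implements OpenSSL's hashed trust-store lookup: consumers resolve CAs in /etc/ssl/certs via <subject-hash>.0 symlinks, which is what c_rehash would otherwise generate. One certificate done by hand, using names from the log:

    # Install one CA into the hashed trust store, as minikube does above.
    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here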
	I1208 01:43:45.566191 1121810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:45.569801 1121810 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:45.569857 1121810 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:45.569950 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:43:45.570007 1121810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:45.598900 1121810 cri.go:89] found id: ""
	I1208 01:43:45.598976 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:45.606734 1121810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:45.614639 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:45.614740 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:45.622525 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:45.622586 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:45.622667 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:43:45.630915 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:45.631002 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:45.638607 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:43:45.646509 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:45.646578 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:45.654866 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.663136 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:45.663229 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.670925 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:43:45.679164 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:45.679233 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 01:43:45.686793 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:43:45.726191 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:43:45.726671 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:45.801402 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:45.801539 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:45.801611 1121810 kubeadm.go:319] OS: Linux
	I1208 01:43:45.801690 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:45.801780 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:45.801861 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:45.801944 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:45.802035 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:45.802107 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:45.802179 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:45.802261 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:45.802332 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:45.874152 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:45.874329 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:45.874481 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:45.879465 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:45.885605 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:45.885775 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:45.885880 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:46.228184 1121810 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:46.789407 1121810 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:46.965778 1121810 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:47.194652 1121810 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:47.706685 1121810 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:47.707058 1121810 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:47.801474 1121810 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:47.801936 1121810 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:48.142552 1121810 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:48.263003 1121810 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:48.445660 1121810 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:48.445984 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:48.591329 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:49.028618 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:49.379863 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:49.569393 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:50.065560 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:50.066253 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:50.069265 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:50.072991 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:43:50.073097 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:50.073174 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:50.073707 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:50.092183 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:50.092359 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:50.100854 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:50.101429 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:50.101728 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:50.234928 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:50.235054 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:36.224252 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202843s
	I1208 01:45:36.224297 1096912 kubeadm.go:319] 
	I1208 01:45:36.224376 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:36.224412 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:36.224526 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:36.224533 1096912 kubeadm.go:319] 
	I1208 01:45:36.224650 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:36.224695 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:36.224737 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:36.224744 1096912 kubeadm.go:319] 
	I1208 01:45:36.229514 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:36.229948 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:36.230084 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:36.230325 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:36.230339 1096912 kubeadm.go:319] 
	I1208 01:45:36.230417 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
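Of the three warnings above, the cgroups v1 one is the most actionable for this failure mode: the 5.15 AWS kernel is running cgroup v1, which kubelet v1.35 refuses by default per the warning. Following the warning's own instructions, keeping v1 working would mean adding the opt-out below to the generated KubeletConfiguration (field name taken from the warning text; whether this is the actual root cause of this run's unhealthy kubelet is not established by the log alone):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Explicitly re-enable cgroup v1 support, as the SystemVerification
    # warning prescribes for kubelet v1.35 or newer.
    failCgroupV1: false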
	I1208 01:45:36.230499 1096912 kubeadm.go:403] duration metric: took 8m6.827986586s to StartCluster
	I1208 01:45:36.230540 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:45:36.230607 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:45:36.261525 1096912 cri.go:89] found id: ""
	I1208 01:45:36.261550 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.261560 1096912 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:45:36.261567 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:45:36.261627 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:45:36.290275 1096912 cri.go:89] found id: ""
	I1208 01:45:36.290298 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.290307 1096912 logs.go:284] No container was found matching "etcd"
	I1208 01:45:36.290313 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:45:36.290373 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:45:36.317513 1096912 cri.go:89] found id: ""
	I1208 01:45:36.317543 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.317552 1096912 logs.go:284] No container was found matching "coredns"
	I1208 01:45:36.317559 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:45:36.317626 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:45:36.344794 1096912 cri.go:89] found id: ""
	I1208 01:45:36.344818 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.344827 1096912 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:45:36.344834 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:45:36.344896 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:45:36.374203 1096912 cri.go:89] found id: ""
	I1208 01:45:36.374231 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.374239 1096912 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:45:36.374246 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:45:36.374305 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:45:36.400248 1096912 cri.go:89] found id: ""
	I1208 01:45:36.400280 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.400291 1096912 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:45:36.400299 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:45:36.400360 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:45:36.425166 1096912 cri.go:89] found id: ""
	I1208 01:45:36.425190 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.425203 1096912 logs.go:284] No container was found matching "kindnet"
	I1208 01:45:36.425213 1096912 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:45:36.425226 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:45:36.489026 1096912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:45:36.489050 1096912 logs.go:123] Gathering logs for containerd ...
	I1208 01:45:36.489063 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:45:36.530813 1096912 logs.go:123] Gathering logs for container status ...
	I1208 01:45:36.530852 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:45:36.565171 1096912 logs.go:123] Gathering logs for kubelet ...
	I1208 01:45:36.565198 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:45:36.637319 1096912 logs.go:123] Gathering logs for dmesg ...
	I1208 01:45:36.637363 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:45:36.667302 1096912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:45:36.667423 1096912 out.go:285] * 
	W1208 01:45:36.667856 1096912 out.go:285] * 
	W1208 01:45:36.670553 1096912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:45:36.675751 1096912 out.go:203] 
	W1208 01:45:36.678675 1096912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:45:36.678951 1096912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:45:36.679016 1096912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:45:36.683776 1096912 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:37:18 no-preload-536520 containerd[758]: time="2025-12-08T01:37:18.211674218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.246725505Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.249013236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267094705Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267745122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.258971098Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.261089965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.269499475Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.270600866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.761254121Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.764004657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.776242422Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.786248661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.366835429Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.370354521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.377378157Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.378171320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.504878516Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.507125147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.516478294Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.517412037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.869447727Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.871885080Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.880678132Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.881176860Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:40.856995    5847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:40.857778    5847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:40.859435    5847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:40.859769    5847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:40.861291    5847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:45:40 up  6:28,  0 user,  load average: 0.53, 1.71, 2.08
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:45:37 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 kubelet[5600]: E1208 01:45:38.184879    5600 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:38 no-preload-536520 kubelet[5629]: E1208 01:45:38.946723    5629 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:38 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:39 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 08 01:45:39 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:39 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:39 no-preload-536520 kubelet[5728]: E1208 01:45:39.672456    5728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:39 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:39 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:45:40 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 08 01:45:40 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:40 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:45:40 no-preload-536520 kubelet[5763]: E1208 01:45:40.444049    5763 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:45:40 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:45:40 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 6 (410.207618ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:45:41.380095 1126240 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.14s)
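Every kubelet restart captured above fails the same validation: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host unless the FailCgroupV1 option named in the kubeadm warnings is explicitly set to false. The sketch below, which assumes shell access to the node and is a possible workaround rather than anything this run attempted, shows how one might confirm the host's cgroup layout and apply the opt-out by hand:

	# "cgroup2fs" indicates cgroups v2; "tmpfs" indicates the legacy v1
	# hierarchy that this Ubuntu 20.04 runner appears to be using.
	stat -fc %T /sys/fs/cgroup/

	# Opt kubelet back into cgroup v1 via the KubeletConfiguration field
	# the warning names (camelCase in the config file), then restart it.
	# The path matches the config file kubeadm wrote above.
	echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

minikube itself suggests retrying with --extra-config=kubelet.cgroup-driver=systemd, though the validation failing here is the cgroup v1 check rather than the driver choice.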

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (102.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1208 01:46:28.221768  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.314715  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.321085  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.332543  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.353952  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.395464  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.476999  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.638639  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:46:59.960382  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:00.602336  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:01.883959  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:04.446568  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:09.568601  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:12.528207  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:19.810181  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.667075323s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
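Each validation error in the stderr block above is the same symptom as the earlier kubeadm failure: nothing is answering on port 8443, so kubectl cannot fetch the OpenAPI schema to validate the addon manifests. A quick reachability probe, sketched here on the assumption that the profile's node container is still running, separates an apiserver outage from a genuine manifest problem:

	# Probe the apiserver health endpoint from inside the node; a healthy
	# control plane answers "ok", while an outage reproduces the
	# connection refusals seen above.
	minikube -p no-preload-536520 ssh -- \
	  curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"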
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-536520 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-536520 describe deploy/metrics-server -n kube-system: exit status 1 (55.83404ms)

** stderr ** 
	error: context "no-preload-536520" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-536520 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1097222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:37:08.305644912Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d978b7ec933dfaa3a40373d30ab4c31d838283a17009d633c2f2575fe0d2fa01",
	            "SandboxKey": "/var/run/docker/netns/d978b7ec933d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:a5:95:c9:47:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "169f4c96797fb11af0a5eb9b81855033f528e1a5dc4666f7c0ac0ae34794695b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 6 (351.763337ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:47:22.474957 1128023 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
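The status checks above also warn that kubectl is pointing at a stale context and that the profile is missing from the kubeconfig. The report's own suggested fix, sketched here, re-registers the profile's endpoint and confirms which context kubectl now targets:

	# Rewrite the kubeconfig entry for the profile, as the status output
	# suggests, then verify the active context.
	minikube -p no-preload-536520 update-context
	kubectl config current-context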
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ delete  │ -p old-k8s-version-895688                                                                                                                                                                                                                                  │ old-k8s-version-895688       │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:38 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:38 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:43:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:43:34.815729 1121810 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:43:34.815855 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.815867 1121810 out.go:374] Setting ErrFile to fd 2...
	I1208 01:43:34.815872 1121810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:43:34.816138 1121810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:43:34.816580 1121810 out.go:368] Setting JSON to false
	I1208 01:43:34.817458 1121810 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23168,"bootTime":1765135047,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:43:34.817532 1121810 start.go:143] virtualization:  
	I1208 01:43:34.821576 1121810 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:43:34.825841 1121810 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:43:34.825977 1121810 notify.go:221] Checking for updates...
	I1208 01:43:34.832259 1121810 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:43:34.835279 1121810 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:43:34.841305 1121810 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:43:34.844728 1121810 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:43:34.847821 1121810 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:43:34.851422 1121810 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:34.851605 1121810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:43:34.879485 1121810 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:43:34.879731 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:34.965413 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:34.956183464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:34.965520 1121810 docker.go:319] overlay module found
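	
	The preflight above shells out to "docker system info --format {{json .}}" and decodes the JSON to validate the host (CPU count, memory, cgroup driver, server version). A minimal sketch of that probe, assuming an illustrative struct that decodes only a few of the keys visible in the log:
	
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo decodes a subset of the fields "docker system info" emits.
    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        ServerVersion string `json:"ServerVersion"`
        CgroupDriver  string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
    }
	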
	I1208 01:43:34.968791 1121810 out.go:179] * Using the docker driver based on user configuration
	I1208 01:43:34.971755 1121810 start.go:309] selected driver: docker
	I1208 01:43:34.971774 1121810 start.go:927] validating driver "docker" against <nil>
	I1208 01:43:34.971788 1121810 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:43:34.972547 1121810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:43:35.028561 1121810 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:43:35.017550524 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:43:35.028722 1121810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1208 01:43:35.028754 1121810 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1208 01:43:35.029019 1121810 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:43:35.032208 1121810 out.go:179] * Using Docker driver with root privileges
	I1208 01:43:35.035049 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:35.035123 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:35.035136 1121810 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 01:43:35.035233 1121810 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:35.038520 1121810 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:43:35.041429 1121810 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:43:35.044577 1121810 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:43:35.047363 1121810 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:43:35.047458 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.047540 1121810 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:43:35.047549 1121810 cache.go:65] Caching tarball of preloaded images
	I1208 01:43:35.047628 1121810 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:43:35.047639 1121810 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:43:35.047753 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:35.047771 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json: {Name:mk01a58f99ac25ab3f8420cd37e5943e99ab0d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:35.067817 1121810 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:43:35.067841 1121810 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:43:35.067860 1121810 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:43:35.067891 1121810 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:43:35.068006 1121810 start.go:364] duration metric: took 93.999µs to acquireMachinesLock for "newest-cni-457779"
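	
	The machines lock above is acquired with a 500ms retry delay and a 10-minute timeout. A minimal sketch of that delay/timeout retry pattern, assuming a plain O_EXCL lockfile rather than minikube's actual lock package:
	
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating an exclusive lockfile until the timeout expires,
    // returning a release func on success.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held")
    }
	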
	I1208 01:43:35.068037 1121810 start.go:93] Provisioning new machine with config: &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:43:35.068112 1121810 start.go:125] createHost starting for "" (driver="docker")
	I1208 01:43:35.071522 1121810 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 01:43:35.071830 1121810 start.go:159] libmachine.API.Create for "newest-cni-457779" (driver="docker")
	I1208 01:43:35.071875 1121810 client.go:173] LocalClient.Create starting
	I1208 01:43:35.072009 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 01:43:35.072049 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072066 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072149 1121810 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 01:43:35.072168 1121810 main.go:143] libmachine: Decoding PEM data...
	I1208 01:43:35.072180 1121810 main.go:143] libmachine: Parsing certificate...
	I1208 01:43:35.072559 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 01:43:35.089783 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 01:43:35.089866 1121810 network_create.go:284] running [docker network inspect newest-cni-457779] to gather additional debugging logs...
	I1208 01:43:35.089892 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779
	W1208 01:43:35.107018 1121810 cli_runner.go:211] docker network inspect newest-cni-457779 returned with exit code 1
	I1208 01:43:35.107053 1121810 network_create.go:287] error running [docker network inspect newest-cni-457779]: docker network inspect newest-cni-457779: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-457779 not found
	I1208 01:43:35.107080 1121810 network_create.go:289] output of [docker network inspect newest-cni-457779]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-457779 not found
	
	** /stderr **
	I1208 01:43:35.107206 1121810 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:35.124993 1121810 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 01:43:35.125469 1121810 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 01:43:35.125932 1121810 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 01:43:35.126507 1121810 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a05be0}
	I1208 01:43:35.126532 1121810 network_create.go:124] attempt to create docker network newest-cni-457779 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 01:43:35.126598 1121810 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-457779 newest-cni-457779
	I1208 01:43:35.185488 1121810 network_create.go:108] docker network newest-cni-457779 192.168.76.0/24 created
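	
	The scan above walks candidate private /24s (49, 58, 67, then 76, the third octet stepped by 9) and takes the first one that no existing bridge interface owns. A toy version of that selection, assuming the set of taken subnets is prebuilt instead of gathered from real docker network inspection:
	
    package main

    import "fmt"

    // firstFreeSubnet steps the third octet by 9, mirroring the 49 -> 58 -> 67
    // -> 76 progression in the log, and returns the first unclaimed /24.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
    }
	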
	I1208 01:43:35.185521 1121810 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-457779" container
	I1208 01:43:35.185613 1121810 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 01:43:35.202705 1121810 cli_runner.go:164] Run: docker volume create newest-cni-457779 --label name.minikube.sigs.k8s.io=newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true
	I1208 01:43:35.222719 1121810 oci.go:103] Successfully created a docker volume newest-cni-457779
	I1208 01:43:35.222808 1121810 cli_runner.go:164] Run: docker run --rm --name newest-cni-457779-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --entrypoint /usr/bin/test -v newest-cni-457779:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 01:43:35.763230 1121810 oci.go:107] Successfully prepared a docker volume newest-cni-457779
	I1208 01:43:35.763299 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:35.763314 1121810 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 01:43:35.763383 1121810 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 01:43:39.703637 1121810 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-457779:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.940199388s)
	I1208 01:43:39.703674 1121810 kic.go:203] duration metric: took 3.940356568s to extract preloaded images to volume ...
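	
	The extraction step above runs a throwaway container whose only job is to untar the lz4 preload into the named volume; the container is removed afterwards and only the volume contents persist. A sketch of the same invocation driven from Go, with the long host paths shortened to placeholders:
	
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        const (
            preloadTar = "/path/to/preloaded-images.tar.lz4" // placeholder for the cached tarball
            volume     = "newest-cni-457779"
            image      = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032"
        )
        // Mount the tarball read-only and the volume writable, then untar
        // straight into the volume from inside the disposable container.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", preloadTar+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
	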
	W1208 01:43:39.703822 1121810 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 01:43:39.703938 1121810 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 01:43:39.755193 1121810 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-457779 --name newest-cni-457779 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-457779 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-457779 --network newest-cni-457779 --ip 192.168.76.2 --volume newest-cni-457779:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 01:43:40.098178 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Running}}
	I1208 01:43:40.139995 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.170386 1121810 cli_runner.go:164] Run: docker exec newest-cni-457779 stat /var/lib/dpkg/alternatives/iptables
	I1208 01:43:40.229771 1121810 oci.go:144] the created container "newest-cni-457779" has a running status.
	I1208 01:43:40.229802 1121810 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa...
	I1208 01:43:40.697110 1121810 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 01:43:40.726929 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.749136 1121810 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 01:43:40.749165 1121810 kic_runner.go:114] Args: [docker exec --privileged newest-cni-457779 chown docker:docker /home/docker/.ssh/authorized_keys]
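	
	The kic SSH setup above generates an RSA keypair on the host, copies the public half into /home/docker/.ssh/authorized_keys inside the container, and fixes its ownership. A sketch of the key-generation half, assuming the standard library plus golang.org/x/crypto/ssh and illustrative output paths:
	
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // PEM-encode the private half (what lands in machines/<name>/id_rsa).
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
            log.Fatal(err)
        }
        // One-line authorized_keys form of the public half (the small .pub
        // upload the log reports).
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            log.Fatal(err)
        }
    }
	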
	I1208 01:43:40.813461 1121810 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:43:40.840725 1121810 machine.go:94] provisionDockerMachine start ...
	I1208 01:43:40.840842 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:40.867802 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:40.868157 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:40.868172 1121810 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:43:41.095667 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.095764 1121810 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:43:41.095876 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.120122 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.120469 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.120480 1121810 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:43:41.290623 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:43:41.290789 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.311253 1121810 main.go:143] libmachine: Using SSH client type: native
	I1208 01:43:41.311570 1121810 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1208 01:43:41.311587 1121810 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:43:41.483218 1121810 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:43:41.483251 1121810 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:43:41.483283 1121810 ubuntu.go:190] setting up certificates
	I1208 01:43:41.483304 1121810 provision.go:84] configureAuth start
	I1208 01:43:41.483379 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:41.501594 1121810 provision.go:143] copyHostCerts
	I1208 01:43:41.501670 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:43:41.501684 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:43:41.501765 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:43:41.501870 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:43:41.501882 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:43:41.501911 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:43:41.501965 1121810 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:43:41.501974 1121810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:43:41.501997 1121810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:43:41.502054 1121810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
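	
	configureAuth then generates a server certificate signed by the local CA with the SAN set listed above (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-457779). A self-contained sketch of that signing step, assuming a freshly generated in-memory CA and placeholder serials and lifetimes rather than the on-disk ca.pem/ca-key.pem:
	
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-457779"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-457779"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
	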
	I1208 01:43:41.701737 1121810 provision.go:177] copyRemoteCerts
	I1208 01:43:41.701810 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:43:41.701853 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.719228 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:41.826667 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:43:41.845605 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:43:41.864446 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 01:43:41.883477 1121810 provision.go:87] duration metric: took 400.143683ms to configureAuth
	I1208 01:43:41.883508 1121810 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:43:41.883715 1121810 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:43:41.883727 1121810 machine.go:97] duration metric: took 1.042983827s to provisionDockerMachine
	I1208 01:43:41.883734 1121810 client.go:176] duration metric: took 6.811847736s to LocalClient.Create
	I1208 01:43:41.883755 1121810 start.go:167] duration metric: took 6.811927679s to libmachine.API.Create "newest-cni-457779"
	I1208 01:43:41.883766 1121810 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:43:41.883777 1121810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:43:41.883842 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:43:41.883884 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:41.901984 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.009332 1121810 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:43:42.014632 1121810 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:43:42.014671 1121810 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:43:42.014684 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:43:42.014745 1121810 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:43:42.014838 1121810 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:43:42.014945 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:43:42.027153 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:42.048521 1121810 start.go:296] duration metric: took 164.740218ms for postStartSetup
	I1208 01:43:42.048996 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.068188 1121810 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:43:42.068517 1121810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:43:42.068578 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.089135 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.196671 1121810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:43:42.202509 1121810 start.go:128] duration metric: took 7.134380905s to createHost
	I1208 01:43:42.202544 1121810 start.go:83] releasing machines lock for "newest-cni-457779", held for 7.134523987s
	I1208 01:43:42.202651 1121810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:43:42.225469 1121810 ssh_runner.go:195] Run: cat /version.json
	I1208 01:43:42.225530 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.225555 1121810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:43:42.225620 1121810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:43:42.248192 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.252198 1121810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:43:42.354818 1121810 ssh_runner.go:195] Run: systemctl --version
	I1208 01:43:42.448479 1121810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:43:42.453374 1121810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:43:42.453474 1121810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:43:42.481420 1121810 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 01:43:42.481454 1121810 start.go:496] detecting cgroup driver to use...
	I1208 01:43:42.481487 1121810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:43:42.481545 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:43:42.497315 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:43:42.510801 1121810 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:43:42.510908 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:43:42.528913 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:43:42.549245 1121810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:43:42.677688 1121810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:43:42.808025 1121810 docker.go:234] disabling docker service ...
	I1208 01:43:42.808134 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:43:42.829668 1121810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:43:42.844784 1121810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:43:42.967423 1121810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:43:43.080509 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:43:43.099271 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:43:43.116361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:43:43.125920 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:43:43.135415 1121810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:43:43.135546 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:43:43.145049 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.154361 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:43:43.163282 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:43:43.172992 1121810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:43:43.183456 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:43:43.192700 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:43:43.201918 1121810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:43:43.211033 1121810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:43:43.218575 1121810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:43:43.226217 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.336148 1121810 ssh_runner.go:195] Run: sudo systemctl restart containerd
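	
	The run of sed commands above rewrites /etc/containerd/config.toml in place: sandbox image, oom-score handling, cgroup driver, runtime type, and CNI conf dir, followed by daemon-reload and a containerd restart. A sketch of the same kind of in-place edit in Go, shown for the SystemdCgroup toggle only (the host driver detected above is cgroupfs, so the value is forced to false):
	
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Multiline match preserves the original indentation via ${1}.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0644); err != nil {
            log.Fatal(err)
        }
    }
	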
	I1208 01:43:43.485946 1121810 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:43:43.486057 1121810 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:43:43.490114 1121810 start.go:564] Will wait 60s for crictl version
	I1208 01:43:43.490229 1121810 ssh_runner.go:195] Run: which crictl
	I1208 01:43:43.494026 1121810 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:43:43.518236 1121810 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:43:43.518359 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.546503 1121810 ssh_runner.go:195] Run: containerd --version
	I1208 01:43:43.572460 1121810 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:43:43.575475 1121810 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:43:43.591594 1121810 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:43:43.595521 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
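	
	The bash one-liner above makes the hosts entry idempotent: strip any existing host.minikube.internal line, then append the fresh mapping. An equivalent sketch in Go:
	
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.76.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        // Keep every line that is not a stale mapping for the same name.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
	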
	I1208 01:43:43.608351 1121810 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:43:43.611336 1121810 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:43:43.611494 1121810 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:43:43.611589 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.637041 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.637067 1121810 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:43:43.637131 1121810 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:43:43.663968 1121810 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:43:43.663994 1121810 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:43:43.664003 1121810 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:43:43.664106 1121810 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:43:43.664184 1121810 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:43:43.690508 1121810 cni.go:84] Creating CNI manager for ""
	I1208 01:43:43.690535 1121810 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:43:43.690554 1121810 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:43:43.690578 1121810 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:43:43.690708 1121810 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:43:43.690785 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:43:43.698933 1121810 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:43:43.699056 1121810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:43:43.707017 1121810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:43:43.720830 1121810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:43:43.734819 1121810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
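	
	The rendered config written above pins podSubnet to 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) and serviceSubnet to 10.96.0.0/12, two ranges that must not overlap for kubeadm to accept them. A quick check of that invariant:
	
    package main

    import (
        "fmt"
        "net"
    )

    // overlaps reports whether two CIDR networks share any addresses; for
    // well-formed networks this reduces to either base address containment.
    func overlaps(a, b *net.IPNet) bool {
        return a.Contains(b.IP) || b.Contains(a.IP)
    }

    func main() {
        _, pod, _ := net.ParseCIDR("10.42.0.0/16")
        _, svc, _ := net.ParseCIDR("10.96.0.0/12")
        fmt.Println(overlaps(pod, svc)) // false: these two ranges are disjoint
    }
	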
	I1208 01:43:43.748443 1121810 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:43:43.752534 1121810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:43:43.763093 1121810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:43:43.892382 1121810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:43:43.909690 1121810 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:43:43.909718 1121810 certs.go:195] generating shared ca certs ...
	I1208 01:43:43.909736 1121810 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:43.909947 1121810 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:43:43.910028 1121810 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:43:43.910042 1121810 certs.go:257] generating profile certs ...
	I1208 01:43:43.910113 1121810 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:43:43.910132 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt with IP's: []
	I1208 01:43:44.271233 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt ...
	I1208 01:43:44.271267 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.crt: {Name:mka7ec1a9b348db295896c4fbe93c78f0eac2f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271468 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key ...
	I1208 01:43:44.271482 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key: {Name:mkc310f4a570315e10c49516c56b2513b55aa651 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.271582 1121810 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:43:44.271600 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 01:43:44.830639 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 ...
	I1208 01:43:44.830674 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399: {Name:mk4dcde78303e922dc6fd9b0f86bb4a694f9ca60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830866 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 ...
	I1208 01:43:44.830881 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399: {Name:mk518276ee5546392f5eb2700a48869cb6431589 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.830967 1121810 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt
	I1208 01:43:44.831050 1121810 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key
	I1208 01:43:44.831119 1121810 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:43:44.831137 1121810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt with IP's: []
	I1208 01:43:44.882804 1121810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt ...
	I1208 01:43:44.882832 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt: {Name:mk7d1a29564431efb40b45d0c303e991b7f53000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883011 1121810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key ...
	I1208 01:43:44.883025 1121810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key: {Name:mk7882d520d12c1dd539975ac85c206b173a5dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:43:44.883213 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:43:44.883262 1121810 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:43:44.883275 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:43:44.883314 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:43:44.883347 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:43:44.883379 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:43:44.883428 1121810 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:43:44.884017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:43:44.902791 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:43:44.921385 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:43:44.941017 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:43:44.959449 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:43:44.976974 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:43:44.995099 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:43:45.050850 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:43:45.098831 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:43:45.141186 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:43:45.192133 1121810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:43:45.239267 1121810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:43:45.262695 1121810 ssh_runner.go:195] Run: openssl version
	I1208 01:43:45.278120 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.292034 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:43:45.303687 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308166 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.308244 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:43:45.359063 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.376429 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 01:43:45.385683 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.402964 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:43:45.419573 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426601 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.426673 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:43:45.470891 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:43:45.478953 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 01:43:45.487067 1121810 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.495033 1121810 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:43:45.505252 1121810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509187 1121810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.509254 1121810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:43:45.550865 1121810 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:43:45.558694 1121810 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
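The sequence above is minikube priming the node's trust store: each CA file copied into /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, then symlinked into /etc/ssl/certs under `<hash>.0` (3ec20f2e.0, b5213941.0, 51391683.0 above), which is the lookup name OpenSSL expects. A minimal sketch of that convention, assuming an illustrative .pem path not taken from this run:

	# Link a CA into the OpenSSL trust store under its subject-hash name,
	# mirroring the openssl/ln pairs logged above.
	pem=/usr/share/ca-certificates/example.pem    # hypothetical path
	h=$(openssl x509 -hash -noout -in "$pem")     # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"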
	I1208 01:43:45.566191 1121810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:43:45.569801 1121810 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 01:43:45.569857 1121810 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:43:45.569950 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:43:45.570007 1121810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:43:45.598900 1121810 cri.go:89] found id: ""
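The empty result here (`found id: ""`) is how minikube concludes that no kube-system containers exist yet, i.e. this is a fresh bring-up rather than a restart over a live control plane. The query is plain crictl with a CRI label filter; a hedged equivalent:

	# List all kube-system pod containers (running or exited) by CRI label;
	# an empty ID list means the control plane has never come up on this node.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system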
	I1208 01:43:45.598976 1121810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:43:45.606734 1121810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 01:43:45.614639 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:43:45.614740 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:43:45.622525 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:43:45.622586 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:43:45.622667 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:43:45.630915 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:43:45.631002 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:43:45.638607 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:43:45.646509 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:43:45.646578 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:43:45.654866 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.663136 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:43:45.663229 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:43:45.670925 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:43:45.679164 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:43:45.679233 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
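Each grep above exits with status 2 because the file is missing, so minikube treats the kubeconfig as stale and removes it before invoking kubeadm. The pattern as a standalone sketch (not minikube's actual code):

	# Drop any /etc/kubernetes kubeconfig that does not reference the
	# expected control-plane endpoint, as the log lines above do one by one.
	endpoint="https://control-plane.minikube.internal:8443"
	for name in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${name}.conf"
	  sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	done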
	I1208 01:43:45.686793 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:43:45.726191 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:43:45.726671 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:43:45.801402 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:43:45.801539 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:43:45.801611 1121810 kubeadm.go:319] OS: Linux
	I1208 01:43:45.801690 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:43:45.801780 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:43:45.801861 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:43:45.801944 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:43:45.802035 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:43:45.802107 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:43:45.802179 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:43:45.802261 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:43:45.802332 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:43:45.874152 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:43:45.874329 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:43:45.874481 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:43:45.879465 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:43:45.885605 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:43:45.885775 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:43:45.885880 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:43:46.228184 1121810 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 01:43:46.789407 1121810 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 01:43:46.965778 1121810 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 01:43:47.194652 1121810 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 01:43:47.706685 1121810 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 01:43:47.707058 1121810 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:47.801474 1121810 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 01:43:47.801936 1121810 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 01:43:48.142552 1121810 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 01:43:48.263003 1121810 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 01:43:48.445660 1121810 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 01:43:48.445984 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:43:48.591329 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:43:49.028618 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:43:49.379863 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:43:49.569393 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:43:50.065560 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:43:50.066253 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:43:50.069265 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:43:50.072991 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:43:50.073097 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:43:50.073174 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:43:50.073707 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:43:50.092183 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:43:50.092359 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:43:50.100854 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:43:50.101429 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:43:50.101728 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:43:50.234928 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:43:50.235054 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:45:36.224252 1096912 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001202843s
	I1208 01:45:36.224297 1096912 kubeadm.go:319] 
	I1208 01:45:36.224376 1096912 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:45:36.224412 1096912 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:45:36.224526 1096912 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:45:36.224533 1096912 kubeadm.go:319] 
	I1208 01:45:36.224650 1096912 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:45:36.224695 1096912 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:45:36.224737 1096912 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:45:36.224744 1096912 kubeadm.go:319] 
	I1208 01:45:36.229514 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:45:36.229948 1096912 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:45:36.230084 1096912 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:45:36.230325 1096912 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:45:36.230339 1096912 kubeadm.go:319] 
	I1208 01:45:36.230417 1096912 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:45:36.230499 1096912 kubeadm.go:403] duration metric: took 8m6.827986586s to StartCluster
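The wait-control-plane failure above reduces to a single probe: kubeadm polls the kubelet's healthz endpoint on localhost for up to 4m0s and gives up when it never answers. The same probe can be run by hand, taken directly from the error message above:

	# The health check kubeadm loops on; a healthy kubelet answers "ok".
	curl -sSL http://127.0.0.1:10248/healthz; echo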
	I1208 01:45:36.230540 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:45:36.230607 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:45:36.261525 1096912 cri.go:89] found id: ""
	I1208 01:45:36.261550 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.261560 1096912 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:45:36.261567 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:45:36.261627 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:45:36.290275 1096912 cri.go:89] found id: ""
	I1208 01:45:36.290298 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.290307 1096912 logs.go:284] No container was found matching "etcd"
	I1208 01:45:36.290313 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:45:36.290373 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:45:36.317513 1096912 cri.go:89] found id: ""
	I1208 01:45:36.317543 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.317552 1096912 logs.go:284] No container was found matching "coredns"
	I1208 01:45:36.317559 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:45:36.317626 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:45:36.344794 1096912 cri.go:89] found id: ""
	I1208 01:45:36.344818 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.344827 1096912 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:45:36.344834 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:45:36.344896 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:45:36.374203 1096912 cri.go:89] found id: ""
	I1208 01:45:36.374231 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.374239 1096912 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:45:36.374246 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:45:36.374305 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:45:36.400248 1096912 cri.go:89] found id: ""
	I1208 01:45:36.400280 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.400291 1096912 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:45:36.400299 1096912 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:45:36.400360 1096912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:45:36.425166 1096912 cri.go:89] found id: ""
	I1208 01:45:36.425190 1096912 logs.go:282] 0 containers: []
	W1208 01:45:36.425203 1096912 logs.go:284] No container was found matching "kindnet"
	I1208 01:45:36.425213 1096912 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:45:36.425226 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:45:36.489026 1096912 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:45:36.480716    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.481266    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.482919    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.483460    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:45:36.485045    5452 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:45:36.489050 1096912 logs.go:123] Gathering logs for containerd ...
	I1208 01:45:36.489063 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:45:36.530813 1096912 logs.go:123] Gathering logs for container status ...
	I1208 01:45:36.530852 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:45:36.565171 1096912 logs.go:123] Gathering logs for kubelet ...
	I1208 01:45:36.565198 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:45:36.637319 1096912 logs.go:123] Gathering logs for dmesg ...
	I1208 01:45:36.637363 1096912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1208 01:45:36.667302 1096912 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001202843s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:45:36.667423 1096912 out.go:285] * 
	W1208 01:45:36.667679 1096912 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1208 01:45:36.667856 1096912 out.go:285] * 
	W1208 01:45:36.670553 1096912 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:45:36.675751 1096912 out.go:203] 
	W1208 01:45:36.678675 1096912 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1208 01:45:36.678951 1096912 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:45:36.679016 1096912 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:45:36.683776 1096912 out.go:203] 
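The suggestion above targets a cgroup-driver mismatch, but the kubelet journal excerpts further down point at the host's cgroup version instead. A quick way to confirm which hierarchy the node is actually on before retrying (a check this report does not run itself):

	# "tmpfs" here means the host is on cgroup v1, "cgroup2fs" means v2.
	stat -fc %T /sys/fs/cgroup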
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:37:18 no-preload-536520 containerd[758]: time="2025-12-08T01:37:18.211674218Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.246725505Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.249013236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267094705Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:19 no-preload-536520 containerd[758]: time="2025-12-08T01:37:19.267745122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.258971098Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.261089965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.269499475Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:20 no-preload-536520 containerd[758]: time="2025-12-08T01:37:20.270600866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.761254121Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.764004657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.776242422Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:21 no-preload-536520 containerd[758]: time="2025-12-08T01:37:21.786248661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.366835429Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.370354521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.377378157Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:23 no-preload-536520 containerd[758]: time="2025-12-08T01:37:23.378171320Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.504878516Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.507125147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.516478294Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.517412037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.869447727Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.871885080Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.880678132Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 08 01:37:25 no-preload-536520 containerd[758]: time="2025-12-08T01:37:25.881176860Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:47:23.123961    6864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:47:23.124967    6864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:47:23.126817    6864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:47:23.127444    6864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:47:23.129602    6864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:47:23 up  6:29,  0 user,  load average: 0.87, 1.44, 1.94
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:47:20 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:47:20 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 459.
	Dec 08 01:47:20 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:20 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:20 no-preload-536520 kubelet[6743]: E1208 01:47:20.897856    6743 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:47:20 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:47:20 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:47:21 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 460.
	Dec 08 01:47:21 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:21 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:21 no-preload-536520 kubelet[6749]: E1208 01:47:21.653336    6749 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:47:21 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:47:21 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:47:22 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 461.
	Dec 08 01:47:22 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:22 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:22 no-preload-536520 kubelet[6769]: E1208 01:47:22.435675    6769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:47:22 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:47:22 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:47:23 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 462.
	Dec 08 01:47:23 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:23 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:47:23 no-preload-536520 kubelet[6869]: E1208 01:47:23.193333    6869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:47:23 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:47:23 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
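The kubelet journal above shows the actual failure: kubelet v1.35 refuses to start on a cgroup v1 host unless FailCgroupV1 is explicitly set to false, exactly as the kubeadm preflight warning said. A hedged sketch of that opt-in as a KubeletConfiguration fragment; the file name is hypothetical, and how the fragment reaches the kubelet (kubeadm patches, a config drop-in, or a minikube extra-config) depends on the setup and is not shown in this report:

	# KubeletConfiguration fragment re-enabling cgroup v1 for kubelet >= 1.35,
	# per the [WARNING SystemVerification] text above.
	cat <<'EOF' > kubelet-cgroupv1-optin.yaml   # hypothetical file name
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF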
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 6 (333.216964ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:47:23.589548 1128259 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (102.21s)
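The status failure above is bookkeeping layered on the cluster failure: the profile's endpoint never made it into the shared kubeconfig, so even `status` bails out with exit 6. When the cluster itself is healthy, the report's own advice applies; a hedged example using this run's profile name:

	# Rewrite the kubeconfig entry for the profile, then confirm the context.
	minikube update-context -p no-preload-536520
	kubectl config current-context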

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (370.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1208 01:47:40.239248  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:47:40.291807  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:48:21.253280  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:25.030935  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:30.127932  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:31.302025  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:49:43.174698  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:51:28.221324  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.014314886s)

                                                
                                                
-- stdout --
	* [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1208 01:47:25.144534 1128548 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:47:25.144678 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144689 1128548 out.go:374] Setting ErrFile to fd 2...
	I1208 01:47:25.144694 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144937 1128548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:47:25.145300 1128548 out.go:368] Setting JSON to false
	I1208 01:47:25.146183 1128548 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23398,"bootTime":1765135047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:47:25.146250 1128548 start.go:143] virtualization:  
	I1208 01:47:25.149287 1128548 out.go:179] * [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:47:25.152995 1128548 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:47:25.153086 1128548 notify.go:221] Checking for updates...
	I1208 01:47:25.159060 1128548 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:47:25.162001 1128548 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:25.164903 1128548 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:47:25.167734 1128548 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:47:25.170597 1128548 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:47:25.173940 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:25.174644 1128548 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:47:25.196090 1128548 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:47:25.196210 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.265905 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.255621287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.266017 1128548 docker.go:319] overlay module found
	I1208 01:47:25.271179 1128548 out.go:179] * Using the docker driver based on existing profile
	I1208 01:47:25.274140 1128548 start.go:309] selected driver: docker
	I1208 01:47:25.274161 1128548 start.go:927] validating driver "docker" against &{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.274269 1128548 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:47:25.275138 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.329239 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.319858232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.329577 1128548 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:47:25.329613 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:25.329682 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:25.329727 1128548 start.go:353] cluster config:
	{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.332831 1128548 out.go:179] * Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	I1208 01:47:25.335606 1128548 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:47:25.338488 1128548 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:47:25.341508 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:25.341637 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.341965 1128548 cache.go:107] acquiring lock: {Name:mk26e7e88ac6993c5141f2d02121dfa2fc547fd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342041 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:47:25.342050 1128548 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.869µs
	I1208 01:47:25.342063 1128548 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:47:25.342075 1128548 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:47:25.342168 1128548 cache.go:107] acquiring lock: {Name:mk0f1b4d6e089d68a7c2b058d311e225652853b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342209 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:47:25.342215 1128548 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 51.192µs
	I1208 01:47:25.342221 1128548 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342231 1128548 cache.go:107] acquiring lock: {Name:mk597bd9b4cd05f2d1a0093859d8b23b8ea1cd1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342263 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:47:25.342268 1128548 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.556µs
	I1208 01:47:25.342274 1128548 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342284 1128548 cache.go:107] acquiring lock: {Name:mka22e7ada81429241ca2443bce21a3f31b8eb66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342310 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:47:25.342315 1128548 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.83µs
	I1208 01:47:25.342323 1128548 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342334 1128548 cache.go:107] acquiring lock: {Name:mkfea4ee3c261ad6c1d7efee63fc672216a4c310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342372 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:47:25.342381 1128548 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.246µs
	I1208 01:47:25.342387 1128548 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342398 1128548 cache.go:107] acquiring lock: {Name:mk8813c8ba18f703b4246d4ffd8656e53b0f2ec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342424 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:47:25.342429 1128548 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 32µs
	I1208 01:47:25.342434 1128548 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:47:25.342566 1128548 cache.go:107] acquiring lock: {Name:mk98329aaba04bc9ea4839996e52989df0918014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342601 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:47:25.342606 1128548 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 163.997µs
	I1208 01:47:25.342654 1128548 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:47:25.342670 1128548 cache.go:107] acquiring lock: {Name:mk58db1a89606bc77924fd68a726167dcd840a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342704 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:47:25.342709 1128548 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 41.297µs
	I1208 01:47:25.342715 1128548 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:47:25.342736 1128548 cache.go:87] Successfully saved all images to host disk.
	I1208 01:47:25.366899 1128548 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:47:25.366925 1128548 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:47:25.366941 1128548 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:47:25.366970 1128548 start.go:360] acquireMachinesLock for no-preload-536520: {Name:mkcfe59c9f9ccdd77be288a5dfb4e3b57f6ad839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.367026 1128548 start.go:364] duration metric: took 36.948µs to acquireMachinesLock for "no-preload-536520"
	I1208 01:47:25.367050 1128548 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:47:25.367061 1128548 fix.go:54] fixHost starting: 
	I1208 01:47:25.367326 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.386712 1128548 fix.go:112] recreateIfNeeded on no-preload-536520: state=Stopped err=<nil>
	W1208 01:47:25.386739 1128548 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:47:25.390105 1128548 out.go:252] * Restarting existing docker container for "no-preload-536520" ...
	I1208 01:47:25.390183 1128548 cli_runner.go:164] Run: docker start no-preload-536520
	I1208 01:47:25.647203 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.672717 1128548 kic.go:430] container "no-preload-536520" state is running.
	I1208 01:47:25.673659 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:25.696605 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.696850 1128548 machine.go:94] provisionDockerMachine start ...
	I1208 01:47:25.696917 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:25.719881 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:25.720251 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:25.720267 1128548 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:47:25.721006 1128548 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:47:28.871078 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:28.871100 1128548 ubuntu.go:182] provisioning hostname "no-preload-536520"
	I1208 01:47:28.871170 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:28.890322 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:28.890666 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:28.890685 1128548 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-536520 && echo "no-preload-536520" | sudo tee /etc/hostname
	I1208 01:47:29.051907 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:29.051990 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.071065 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:29.071389 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:29.071409 1128548 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-536520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-536520/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-536520' | sudo tee -a /etc/hosts; 
				fi
			fi
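	# Note (illustrative, not captured output): the hosts-file edit performed by the
	# shell snippet above can be spot-checked from the host while the container runs:
	#   docker exec no-preload-536520 grep no-preload-536520 /etc/hosts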
	I1208 01:47:29.230899 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:47:29.230991 1128548 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:47:29.231044 1128548 ubuntu.go:190] setting up certificates
	I1208 01:47:29.231075 1128548 provision.go:84] configureAuth start
	I1208 01:47:29.231167 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.253696 1128548 provision.go:143] copyHostCerts
	I1208 01:47:29.253770 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:47:29.253785 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:47:29.253863 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:47:29.253977 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:47:29.253988 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:47:29.254015 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:47:29.254069 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:47:29.254079 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:47:29.254103 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:47:29.254158 1128548 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.no-preload-536520 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-536520]
	I1208 01:47:29.311134 1128548 provision.go:177] copyRemoteCerts
	I1208 01:47:29.311266 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:47:29.311345 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.328954 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.438688 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:47:29.457711 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:47:29.476430 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:47:29.494899 1128548 provision.go:87] duration metric: took 263.788812ms to configureAuth
	I1208 01:47:29.494927 1128548 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:47:29.495124 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:29.495136 1128548 machine.go:97] duration metric: took 3.798277669s to provisionDockerMachine
	I1208 01:47:29.495144 1128548 start.go:293] postStartSetup for "no-preload-536520" (driver="docker")
	I1208 01:47:29.495155 1128548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:47:29.495213 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:47:29.495257 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.514045 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.618827 1128548 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:47:29.622435 1128548 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:47:29.622486 1128548 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:47:29.622498 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:47:29.622555 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:47:29.622644 1128548 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:47:29.622753 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:47:29.630547 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:29.648755 1128548 start.go:296] duration metric: took 153.595239ms for postStartSetup
	I1208 01:47:29.648836 1128548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:47:29.648887 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.666163 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.767705 1128548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:47:29.772600 1128548 fix.go:56] duration metric: took 4.405531603s for fixHost
	I1208 01:47:29.772626 1128548 start.go:83] releasing machines lock for "no-preload-536520", held for 4.405586815s
	I1208 01:47:29.772706 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.789841 1128548 ssh_runner.go:195] Run: cat /version.json
	I1208 01:47:29.789937 1128548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:47:29.789945 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.789996 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.812717 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.816251 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:30.016925 1128548 ssh_runner.go:195] Run: systemctl --version
	I1208 01:47:30.052830 1128548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:47:30.068491 1128548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:47:30.068623 1128548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:47:30.091127 1128548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:47:30.091213 1128548 start.go:496] detecting cgroup driver to use...
	I1208 01:47:30.092843 1128548 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:47:30.092996 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:47:30.118949 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:47:30.135926 1128548 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:47:30.136004 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:47:30.154610 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:47:30.169484 1128548 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:47:30.284887 1128548 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:47:30.401821 1128548 docker.go:234] disabling docker service ...
	I1208 01:47:30.401907 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:47:30.417825 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:47:30.431329 1128548 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:47:30.549687 1128548 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:47:30.701450 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:47:30.714761 1128548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:47:30.729964 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:47:30.740571 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:47:30.749764 1128548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:47:30.749912 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:47:30.759343 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.768528 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:47:30.777543 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.786277 1128548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:47:30.795137 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:47:30.804067 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:47:30.812868 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:47:30.821824 1128548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:47:30.829778 1128548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:47:30.837363 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:30.946320 1128548 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 01:47:31.048379 1128548 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:47:31.048491 1128548 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:47:31.052641 1128548 start.go:564] Will wait 60s for crictl version
	I1208 01:47:31.052748 1128548 ssh_runner.go:195] Run: which crictl
	I1208 01:47:31.056733 1128548 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:47:31.082010 1128548 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
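	# Note (illustrative, not captured output): the same runtime probe can be repeated
	# by hand against the node, using the crictl path shown above:
	#   docker exec no-preload-536520 sudo /usr/local/bin/crictl version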
	I1208 01:47:31.082131 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.107752 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.134903 1128548 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:47:31.137907 1128548 cli_runner.go:164] Run: docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:47:31.155436 1128548 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:47:31.159751 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:47:31.170770 1128548 kubeadm.go:884] updating cluster {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:47:31.170896 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:31.170961 1128548 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:47:31.196822 1128548 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:47:31.196850 1128548 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:47:31.196858 1128548 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:47:31.196959 1128548 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-536520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:47:31.197036 1128548 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:47:31.226544 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:31.226567 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:31.226586 1128548 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:47:31.226628 1128548 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-536520 NodeName:no-preload-536520 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:47:31.226797 1128548 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-536520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
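	# Note (illustrative, not captured output): the rendered kubeadm config above is
	# written to /var/tmp/minikube/kubeadm.yaml.new on the node and can be inspected
	# by hand, e.g.:
	#   docker exec no-preload-536520 sudo cat /var/tmp/minikube/kubeadm.yaml.new
	# minikube itself diffs it against the existing kubeadm.yaml later in this log to
	# decide whether the control plane needs reconfiguration.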
	
	I1208 01:47:31.226877 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:47:31.234989 1128548 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:47:31.235078 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:47:31.242869 1128548 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:47:31.261281 1128548 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:47:31.274779 1128548 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1208 01:47:31.288252 1128548 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:47:31.292107 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:47:31.302333 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:31.430216 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:31.454999 1128548 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520 for IP: 192.168.85.2
	I1208 01:47:31.455023 1128548 certs.go:195] generating shared ca certs ...
	I1208 01:47:31.455040 1128548 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:31.455242 1128548 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:47:31.455311 1128548 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:47:31.455324 1128548 certs.go:257] generating profile certs ...
	I1208 01:47:31.456430 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key
	I1208 01:47:31.456527 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035
	I1208 01:47:31.456618 1128548 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key
	I1208 01:47:31.456780 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:47:31.456840 1128548 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:47:31.456857 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:47:31.456908 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:47:31.457070 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:47:31.457132 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:47:31.457218 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:31.457887 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:47:31.480298 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:47:31.500772 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:47:31.521065 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:47:31.540174 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:47:31.558342 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:47:31.576697 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:47:31.595322 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:47:31.613726 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:47:31.631953 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:47:31.650290 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:47:31.669394 1128548 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:47:31.682263 1128548 ssh_runner.go:195] Run: openssl version
	I1208 01:47:31.688781 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.696556 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:47:31.704170 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.707971 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.708038 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.749348 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:47:31.756878 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.764251 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:47:31.771591 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775408 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775525 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.817721 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:47:31.825386 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.832732 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:47:31.840622 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844445 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844541 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.885480 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:47:31.892792 1128548 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:47:31.896494 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:47:31.937567 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:47:31.978872 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:47:32.020623 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:47:32.062625 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:47:32.104715 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:47:32.148960 1128548 kubeadm.go:401] StartCluster: {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
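
The StartCluster line above is a verbatim %+v dump of minikube's cluster configuration struct. An illustrative (and hypothetical, heavily trimmed) mirror of the fields visible in it; the real struct is much larger:

    // Illustrative subset only; field names follow the log output above.
    type KubernetesConfig struct {
        KubernetesVersion string // v1.35.0-beta.0
        ClusterName       string // no-preload-536520
        ContainerRuntime  string // containerd
        NetworkPlugin     string // cni
        ServiceCIDR       string // 10.96.0.0/12
    }

    type Node struct {
        IP           string // 192.168.85.2
        Port         int    // 8443
        ControlPlane bool   // true
        Worker       bool   // true
    }
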
	I1208 01:47:32.149129 1128548 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:47:32.149249 1128548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:47:32.177182 1128548 cri.go:89] found id: ""
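
Before deciding how to start, StartCluster asks the CRI for any pre-existing kube-system containers; the empty `found id: ""` result above confirms a cold control plane. A sketch of that query, shelling out the way ssh_runner does:

    package main

    import (
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs lists all containers, running or exited, whose pod
    // namespace label is kube-system, matching the crictl invocation above.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty slice: no containers yet
    }
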
	I1208 01:47:32.177302 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:47:32.185467 1128548 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:47:32.185537 1128548 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:47:32.185605 1128548 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:47:32.193198 1128548 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
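
The restart decision above hinges on three paths: if kubeadm-flags.env, the kubelet config.yaml, and the etcd data directory all exist, minikube attempts a control-plane restart rather than a fresh `kubeadm init` (the failed /data/minikube probe only skips compat symlinks and is harmless). A sketch of the check, using a hypothetical helper name:

    package main

    import "os/exec"

    // canRestartCluster mirrors the "found existing configuration files" probe:
    // one ls over the three paths; any missing path fails the whole command.
    func canRestartCluster() bool {
        err := exec.Command("sudo", "ls",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd").Run()
        return err == nil
    }
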
	I1208 01:47:32.193631 1128548 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.193734 1128548 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-536520" cluster setting kubeconfig missing "no-preload-536520" context setting]
	I1208 01:47:32.194046 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
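
The kubeconfig repair above adds the missing cluster and context stanzas for the profile, then rewrites the file under a write lock. A minimal sketch of the same fix-up using client-go's clientcmd package (an assumption for illustration, not necessarily minikube's own code path), with auth details omitted:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // addProfileToKubeconfig inserts cluster and context entries named after
    // the profile and writes the kubeconfig back to the same path.
    func addProfileToKubeconfig(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        cfg.Clusters[name] = &api.Cluster{Server: server}
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        return clientcmd.WriteToFile(*cfg, path)
    }
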
	I1208 01:47:32.195334 1128548 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:47:32.203280 1128548 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:47:32.203313 1128548 kubeadm.go:602] duration metric: took 17.762571ms to restartPrimaryControlPlane
	I1208 01:47:32.203323 1128548 kubeadm.go:403] duration metric: took 54.376484ms to StartCluster
	I1208 01:47:32.203356 1128548 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.203428 1128548 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.204022 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.204232 1128548 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:47:32.204520 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:32.204571 1128548 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:47:32.204638 1128548 addons.go:70] Setting storage-provisioner=true in profile "no-preload-536520"
	I1208 01:47:32.204677 1128548 addons.go:239] Setting addon storage-provisioner=true in "no-preload-536520"
	I1208 01:47:32.204699 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.205163 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205504 1128548 addons.go:70] Setting default-storageclass=true in profile "no-preload-536520"
	I1208 01:47:32.205524 1128548 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-536520"
	I1208 01:47:32.205784 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205965 1128548 addons.go:70] Setting dashboard=true in profile "no-preload-536520"
	I1208 01:47:32.205979 1128548 addons.go:239] Setting addon dashboard=true in "no-preload-536520"
	W1208 01:47:32.205986 1128548 addons.go:248] addon dashboard should already be in state true
	I1208 01:47:32.206006 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.206434 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
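
Each addon pass above first confirms the node container is actually up via `docker container inspect --format={{.State.Status}}`. A sketch of that probe:

    package main

    import (
        "os/exec"
        "strings"
    )

    // containerState returns Docker's view of a container's state ("running",
    // "exited", ...) using the same Go template as the cli_runner lines above.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
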
	I1208 01:47:32.211155 1128548 out.go:179] * Verifying Kubernetes components...
	I1208 01:47:32.214232 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:32.243438 1128548 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:47:32.247673 1128548 addons.go:239] Setting addon default-storageclass=true in "no-preload-536520"
	I1208 01:47:32.247722 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.248146 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.248283 1128548 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:47:32.249393 1128548 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.249422 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:47:32.249479 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.254001 1128548 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:47:32.258300 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:47:32.258329 1128548 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:47:32.258396 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.286961 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.294498 1128548 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.294541 1128548 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:47:32.294621 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.308511 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.336182 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
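
To copy manifests into the node, minikube needs the host port Docker mapped to the container's 22/tcp; the inspect template above extracts it, and sshutil then dials 127.0.0.1:33868 with the profile's id_rsa key. A sketch of the port lookup:

    package main

    import (
        "os/exec"
        "strings"
    )

    // sshHostPort resolves the localhost port mapped to the container's SSH
    // port, using the same Go template string as the log lines above.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
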
	I1208 01:47:32.455434 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:32.503421 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.512052 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:47:32.512091 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:47:32.529651 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:47:32.529679 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:47:32.538583 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.564793 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:47:32.564828 1128548 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:47:32.600166 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:47:32.600234 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:47:32.620391 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:47:32.620414 1128548 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:47:32.635342 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:47:32.635368 1128548 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:47:32.649355 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:47:32.649379 1128548 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:47:32.663687 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:47:32.663714 1128548 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:47:32.677279 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:32.677303 1128548 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:47:32.690968 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
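
All the applies above share one shape: the version-pinned kubectl under /var/lib/minikube/binaries, the in-VM kubeconfig, and one -f per manifest. A sketch of how such a command line could be assembled (buildApplyCmd is a hypothetical helper, for illustration only):

    package main

    import "fmt"

    // buildApplyCmd assembles the kubectl apply invocation seen in the log.
    func buildApplyCmd(version string, force bool, manifests ...string) string {
        cmd := fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
            "/var/lib/minikube/binaries/%s/kubectl apply", version)
        if force {
            cmd += " --force"
        }
        for _, m := range manifests {
            cmd += " -f " + m
        }
        return cmd
    }

For example, buildApplyCmd("v1.35.0-beta.0", false, "/etc/kubernetes/addons/storage-provisioner.yaml") reproduces the first storage-provisioner apply above.
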
	I1208 01:47:33.092759 1128548 node_ready.go:35] waiting up to 6m0s for node "no-preload-536520" to be "Ready" ...
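
node_ready.go above polls the node object for up to six minutes until its Ready condition turns True. A sketch of that loop with client-go (assumed library, for illustration):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the named node reports Ready=True or the
    // timeout hits. API errors are swallowed and retried: the apiserver may
    // still be starting, exactly as the "connection refused" warnings below show.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
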
	W1208 01:47:33.093211 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093277 1128548 retry.go:31] will retry after 135.377583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.093354 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093401 1128548 retry.go:31] will retry after 356.085059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
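
The retry messages in this stretch follow a recognizable pattern: each failed apply is rescheduled after a jittered, roughly growing delay (135ms, 356ms, 290ms, ... climbing past a second further down). A sketch of that shape; the exact schedule retry.go uses differs:

    package main

    import (
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs fn with jittered, doubling delays until it
    // succeeds or attempts are exhausted; the jitter keeps the concurrent
    // addon appliers in the log from retrying in lockstep.
    func retryWithJitter(attempts int, fn func() error) error {
        delay := 150 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
        return err
    }
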
	W1208 01:47:33.093693 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093739 1128548 retry.go:31] will retry after 290.352829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.229413 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.295712 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.295753 1128548 retry.go:31] will retry after 504.528201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.385144 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:33.450468 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:33.455498 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.455579 1128548 retry.go:31] will retry after 210.308534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.513454 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.513515 1128548 retry.go:31] will retry after 261.594769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.666341 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:33.730275 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.730367 1128548 retry.go:31] will retry after 515.285755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.775591 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:33.801214 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.874343 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.874382 1128548 retry.go:31] will retry after 373.513153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.879699 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.879734 1128548 retry.go:31] will retry after 640.492075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.246844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:34.248194 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:34.323597 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.323634 1128548 retry.go:31] will retry after 1.019529809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:34.339043 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.339089 1128548 retry.go:31] will retry after 1.209309516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.520466 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:34.578496 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.578579 1128548 retry.go:31] will retry after 641.799617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:35.094210 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:35.220879 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:35.291769 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.291804 1128548 retry.go:31] will retry after 1.824974972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
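
Every failure above bottoms out in the same symptom: kubectl cannot download the OpenAPI schema because nothing is listening on port 8443 yet, neither on localhost inside the node nor on 192.168.85.2 from outside. The retries are, in effect, waiting for a probe like this to pass (sketch):

    package main

    import (
        "net"
        "time"
    )

    // apiserverListening reports whether a TCP connection to the apiserver
    // endpoint can be opened at all, e.g. apiserverListening("192.168.85.2:8443").
    func apiserverListening(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }
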
	I1208 01:47:35.343984 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:35.420724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.420761 1128548 retry.go:31] will retry after 1.505282353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.548906 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:35.619439 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.619474 1128548 retry.go:31] will retry after 1.475994436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
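The delays retry.go picks above (1.82s, 1.51s, 1.48s, later climbing toward 13s) are consistent with a jittered, growing backoff between apply attempts. A rough sketch of that pattern follows; retryWithBackoff, attempts, and base are invented names for illustration, not minikube's actual retry.go API:

package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds, the attempt budget runs out,
// or ctx is cancelled, sleeping a jittered, growing delay between attempts.
func retryWithBackoff(ctx context.Context, attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << i                            // exponential growth
		d += time.Duration(rand.Int63n(int64(d))) // jitter, so delays look uneven as in the log
		fmt.Printf("will retry after %v: %v\n", d, err)
		select {
		case <-time.After(d):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}

func main() {
	err := retryWithBackoff(context.Background(), 5, time.Second, func() error {
		return errors.New("connect: connection refused") // stand-in for the failing kubectl apply
	})
	fmt.Println("giving up:", err)
}

Backoff of this kind only bounds how often the apply is re-run; it cannot converge while the apiserver stays down, which is why the same stderr recurs for the remainder of the window.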
	I1208 01:47:36.927068 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:36.990909 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:36.990950 1128548 retry.go:31] will retry after 1.384042047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.095678 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:37.117700 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:37.162835 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.162944 1128548 retry.go:31] will retry after 2.706380277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:37.196118 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.196155 1128548 retry.go:31] will retry after 2.546989667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:37.593980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:38.375563 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:38.448945 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:38.448983 1128548 retry.go:31] will retry after 4.228344134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.743778 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:39.805386 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.805423 1128548 retry.go:31] will retry after 1.941295739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.869521 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:39.940830 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.940861 1128548 retry.go:31] will retry after 1.677329859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:40.093478 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:41.618647 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:41.680452 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.680482 1128548 retry.go:31] will retry after 3.415857651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.747716 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:41.810998 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.811032 1128548 retry.go:31] will retry after 4.001958095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:42.593808 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:42.678234 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:42.741215 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:42.741256 1128548 retry.go:31] will retry after 4.935696048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:45.093503 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:45.096924 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:45.182970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.183006 1128548 retry.go:31] will retry after 5.169461339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.814151 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:45.895671 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.895706 1128548 retry.go:31] will retry after 6.069460108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:47.593429 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
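Interleaved with the addon retries, node_ready.go is polling the Ready condition of node "no-preload-536520" against the same dead endpoint. The check it keeps retrying is, in essence, a conditions lookup like the following client-go sketch (nodeReady is an illustrative name, not minikube's function; the kubeconfig path is taken from the commands above):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node carries a Ready condition with
// status True. While the apiserver is down, the Get itself fails with the
// "connection refused" errors seen in this log.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), cs, "no-preload-536520")
	fmt.Println(ready, err)
}

Both the addon applies and this readiness poll depend on the same endpoint, so neither can make progress until the apiserver comes back.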
	I1208 01:47:47.677825 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:47.741017 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:47.741053 1128548 retry.go:31] will retry after 6.358930969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:50.093389 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:50.353033 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:50.436848 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:50.436881 1128548 retry.go:31] will retry after 13.13295311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:51.966065 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:52.046307 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:52.046338 1128548 retry.go:31] will retry after 13.071324249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:52.093819 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:54.094117 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:54.100443 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:54.199978 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:54.200013 1128548 retry.go:31] will retry after 5.921409717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:56.593485 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:58.594917 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:00.169117 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:00.465737 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:00.465774 1128548 retry.go:31] will retry after 20.435782348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:01.093648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:03.570310 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:03.593451 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:03.632880 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:03.632912 1128548 retry.go:31] will retry after 21.217435615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:05.117897 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:05.178316 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:05.178352 1128548 retry.go:31] will retry after 19.478477459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:05.594138 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:08.093454 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:10.593499 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:13.093411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:15.093543 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:17.593421 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:20.093502 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
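The node_ready.go warnings interleaved throughout are the harness polling GET /api/v1/nodes/no-preload-536520 for the Ready condition. A sketch of the equivalent check with client-go (not the harness's code; the kubeconfig path and node name are taken from the log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(
            context.Background(), "no-preload-536520", metav1.GetOptions{})
        if err != nil {
            // With the apiserver down this fails with "connection refused",
            // matching the warnings in the log.
            fmt.Println("get node failed:", err)
            return
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("node Ready=%s\n", c.Status)
            }
        }
    }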
	I1208 01:48:20.902007 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:20.961724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:20.961758 1128548 retry.go:31] will retry after 19.271074882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:22.593561 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:24.657892 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:24.716774 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.716812 1128548 retry.go:31] will retry after 21.882989692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.851274 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:24.908152 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.908186 1128548 retry.go:31] will retry after 13.56417867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:25.093911 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:27.593411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:29.594271 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:32.093606 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:34.094178 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:36.593455 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:38.472558 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:38.531541 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:38.531575 1128548 retry.go:31] will retry after 35.735118355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:38.593962 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:40.233963 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:40.295686 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:40.295723 1128548 retry.go:31] will retry after 24.954393837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:41.093636 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:43.094034 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:45.593528 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:46.601180 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:46.663105 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:46.663141 1128548 retry.go:31] will retry after 26.276311259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:47.594156 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:50.093521 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:52.593610 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:55.093548 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:57.094350 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:59.593344 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:01.593410 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:03.594090 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:05.250844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:49:05.313013 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:05.313123 1128548 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1208 01:49:06.093980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:08.094120 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:10.094394 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:12.593567 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:12.940260 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:49:12.998388 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:12.998515 1128548 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.267569 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:49:14.332970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:14.333076 1128548 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.337901 1128548 out.go:179] * Enabled addons: 
	I1208 01:49:14.340893 1128548 addons.go:530] duration metric: took 1m42.136312022s for enable addons: enabled=[]
	W1208 01:49:14.593648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 112 further identical node_ready retry warnings elided: the same Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520" refused, roughly every 2.5 seconds from 01:49:17 through 01:53:30 ...]
	W1208 01:53:32.593363 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:33.093099 1128548 node_ready.go:38] duration metric: took 6m0.00024354s for node "no-preload-536520" to be "Ready" ...
	I1208 01:53:33.096356 1128548 out.go:203] 
	W1208 01:53:33.099424 1128548 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:53:33.099449 1128548 out.go:285] * 
	W1208 01:53:33.101601 1128548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:53:33.103637 1128548 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
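Every failure in the captured stderr above is the same symptom: kubectl gets connection refused on https://localhost:8443, so manifest validation cannot download the OpenAPI schema, and the node "Ready" poll fails against 192.168.85.2:8443 for the same reason. The addon manifests themselves are never at fault; the apiserver simply is not listening. A minimal diagnostic sketch (hypothetical follow-up commands, not part of the recorded run; assumes the kic container name and node IP shown in the post-mortem below):

	# is the kube-apiserver container running inside the node? (containerd runtime, so crictl)
	docker exec no-preload-536520 crictl ps -a --name kube-apiserver
	# probe the apiserver readiness endpoint directly on the node IP
	curl -sk https://192.168.85.2:8443/readyz || echo "apiserver not ready"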
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1128684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:47:25.421292194Z",
	            "FinishedAt": "2025-12-08T01:47:24.077520836Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "508635803fd26385f5b74c49f258f541cf3f3701572a3e277063698fd55748b0",
	            "SandboxKey": "/var/run/docker/netns/508635803fd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b7:e8:6e:2b:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "662425aa0da883d43861485458a7d96ef656064827e7d2e8fc052d0ab70deda4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
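The inspect dump confirms the container itself is healthy: Running since 01:47:25, with 8443 published on 127.0.0.1:33871, so the failure sits inside the guest rather than at the Docker layer. When only a couple of fields matter, a Go template keeps the post-mortem short (a sketch using standard docker inspect templating):

	# just the container state and the published 8443 mapping
	docker inspect -f '{{.State.Status}} {{index .NetworkSettings.Ports "8443/tcp"}}' no-preload-536520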
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 2 (430.326193ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
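The "(may be ok)" note is deliberate: minikube status encodes component state in its exit code, so a non-zero exit alongside a Running host is the expected shape when kubelet and the apiserver are down. For a per-component breakdown one could run (hypothetical, not part of the recorded run):

	out/minikube-linux-arm64 status -p no-preload-536520 --output json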
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-536520 logs -n 25: (1.067749637s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:51 UTC │                     │
	│ stop    │ -p newest-cni-457779 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-457779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:53:26.756000 1136586 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:53:26.756538 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756548 1136586 out.go:374] Setting ErrFile to fd 2...
	I1208 01:53:26.756553 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756842 1136586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:53:26.757268 1136586 out.go:368] Setting JSON to false
	I1208 01:53:26.758219 1136586 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23760,"bootTime":1765135047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:53:26.758285 1136586 start.go:143] virtualization:  
	I1208 01:53:26.761027 1136586 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:53:26.763300 1136586 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:53:26.763385 1136586 notify.go:221] Checking for updates...
	I1208 01:53:26.769236 1136586 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:53:26.772301 1136586 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:26.775351 1136586 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:53:26.778370 1136586 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:53:26.781331 1136586 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:53:26.784939 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:26.785587 1136586 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:53:26.821497 1136586 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:53:26.821612 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.884858 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.874574541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.884969 1136586 docker.go:319] overlay module found
	I1208 01:53:26.888166 1136586 out.go:179] * Using the docker driver based on existing profile
	I1208 01:53:26.891132 1136586 start.go:309] selected driver: docker
	I1208 01:53:26.891162 1136586 start.go:927] validating driver "docker" against &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.891271 1136586 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:53:26.892009 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.946578 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.937487208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.946934 1136586 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:53:26.946970 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:26.947032 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:26.947088 1136586 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.951997 1136586 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:53:26.954840 1136586 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:53:26.957745 1136586 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:53:26.960653 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:26.960709 1136586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:53:26.960722 1136586 cache.go:65] Caching tarball of preloaded images
	I1208 01:53:26.960734 1136586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:53:26.960819 1136586 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:53:26.960831 1136586 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:53:26.961033 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:26.980599 1136586 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:53:26.980630 1136586 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:53:26.980646 1136586 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:53:26.980676 1136586 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:53:26.980741 1136586 start.go:364] duration metric: took 41.994µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:53:26.980766 1136586 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:53:26.980775 1136586 fix.go:54] fixHost starting: 
	I1208 01:53:26.981064 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:26.998167 1136586 fix.go:112] recreateIfNeeded on newest-cni-457779: state=Stopped err=<nil>
	W1208 01:53:26.998205 1136586 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:53:25.593347 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:27.593483 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:30.093460 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:27.003360 1136586 out.go:252] * Restarting existing docker container for "newest-cni-457779" ...
	I1208 01:53:27.003497 1136586 cli_runner.go:164] Run: docker start newest-cni-457779
	I1208 01:53:27.261076 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:27.282732 1136586 kic.go:430] container "newest-cni-457779" state is running.
	I1208 01:53:27.283122 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:27.311045 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:27.311287 1136586 machine.go:94] provisionDockerMachine start ...
	I1208 01:53:27.311346 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:27.335078 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:27.335680 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:27.335692 1136586 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:53:27.336739 1136586 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:53:30.502303 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.502328 1136586 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:53:30.502403 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.520473 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.520821 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.520832 1136586 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:53:30.680340 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.680522 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.698887 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.699207 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.699230 1136586 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:53:30.850881 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:53:30.850907 1136586 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:53:30.850931 1136586 ubuntu.go:190] setting up certificates
	I1208 01:53:30.850939 1136586 provision.go:84] configureAuth start
	I1208 01:53:30.851000 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:30.868852 1136586 provision.go:143] copyHostCerts
	I1208 01:53:30.868925 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:53:30.868935 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:53:30.869018 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:53:30.869113 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:53:30.869119 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:53:30.869143 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:53:30.869192 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:53:30.869197 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:53:30.869218 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:53:30.869262 1136586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:53:31.146721 1136586 provision.go:177] copyRemoteCerts
	I1208 01:53:31.146819 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:53:31.146887 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.165202 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.270344 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:53:31.288520 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:53:31.307009 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:53:31.325139 1136586 provision.go:87] duration metric: took 474.176778ms to configureAuth
	I1208 01:53:31.325166 1136586 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:53:31.325413 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:31.325428 1136586 machine.go:97] duration metric: took 4.014132188s to provisionDockerMachine
	I1208 01:53:31.325438 1136586 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:53:31.325453 1136586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:53:31.325527 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:53:31.325572 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.342958 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.450484 1136586 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:53:31.453930 1136586 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:53:31.453961 1136586 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:53:31.453978 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:53:31.454035 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:53:31.454126 1136586 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:53:31.454236 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:53:31.461814 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:31.480492 1136586 start.go:296] duration metric: took 155.029827ms for postStartSetup
	I1208 01:53:31.480576 1136586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:53:31.480620 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.498567 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.608416 1136586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:53:31.613302 1136586 fix.go:56] duration metric: took 4.632518901s for fixHost
	I1208 01:53:31.613327 1136586 start.go:83] releasing machines lock for "newest-cni-457779", held for 4.632572375s
	I1208 01:53:31.613414 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:31.630699 1136586 ssh_runner.go:195] Run: cat /version.json
	I1208 01:53:31.630750 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.630785 1136586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:53:31.630847 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.650759 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.653824 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.754273 1136586 ssh_runner.go:195] Run: systemctl --version
	I1208 01:53:31.849639 1136586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:53:31.855754 1136586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:53:31.855850 1136586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:53:31.866557 1136586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:53:31.866588 1136586 start.go:496] detecting cgroup driver to use...
	I1208 01:53:31.866621 1136586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:53:31.866707 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:53:31.887994 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:53:31.906727 1136586 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:53:31.906830 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:53:31.922954 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:53:31.936664 1136586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:53:32.054316 1136586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:53:32.173483 1136586 docker.go:234] disabling docker service ...
	I1208 01:53:32.173578 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:53:32.189444 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:53:32.206742 1136586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:53:32.325262 1136586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:53:32.443602 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:53:32.456770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:53:32.473213 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:53:32.483724 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:53:32.493138 1136586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:53:32.493251 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:53:32.502652 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.512217 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:53:32.521333 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.530989 1136586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:53:32.539889 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:53:32.549127 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:53:32.558425 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:53:32.567684 1136586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:53:32.575542 1136586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:53:32.583139 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:32.723777 1136586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 01:53:32.846014 1136586 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:53:32.846088 1136586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:53:32.849865 1136586 start.go:564] Will wait 60s for crictl version
	I1208 01:53:32.849924 1136586 ssh_runner.go:195] Run: which crictl
	I1208 01:53:32.853562 1136586 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:53:32.880330 1136586 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:53:32.880452 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.901579 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.928462 1136586 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:53:32.931363 1136586 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:53:32.945897 1136586 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:53:32.950021 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:32.963090 1136586 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1208 01:53:32.593363 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:33.093099 1128548 node_ready.go:38] duration metric: took 6m0.00024354s for node "no-preload-536520" to be "Ready" ...
	I1208 01:53:33.096356 1128548 out.go:203] 
	W1208 01:53:33.099424 1128548 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:53:33.099449 1128548 out.go:285] * 
	W1208 01:53:33.101601 1128548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:53:33.103637 1128548 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012707347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012722928Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012784722Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012807458Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012980408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012995694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013007829Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013026414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013056306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013096109Z" level=info msg="Connect containerd service"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013397585Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.014248932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024764086Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024952617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025010275Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025073168Z" level=info msg="Start recovering state"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046219867Z" level=info msg="Start event monitor"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046299482Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046310116Z" level=info msg="Start streaming server"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046320315Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046329185Z" level=info msg="runtime interface starting up..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046337029Z" level=info msg="starting plugins..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046369292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:47:31 no-preload-536520 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.048165739Z" level=info msg="containerd successfully booted in 0.067149s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:53:34.655104    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:34.657968    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:34.658827    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:34.661085    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:34.661380    3998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:53:34 up  6:36,  0 user,  load average: 1.22, 0.87, 1.47
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:53:31 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:31 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 08 01:53:31 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:31 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:31 no-preload-536520 kubelet[3880]: E1208 01:53:31.893639    3880 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:31 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:31 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:32 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 08 01:53:32 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:32 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:32 no-preload-536520 kubelet[3886]: E1208 01:53:32.661447    3886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:32 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:32 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:33 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 08 01:53:33 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:33 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:33 no-preload-536520 kubelet[3892]: E1208 01:53:33.465245    3892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:33 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:33 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:34 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 08 01:53:34 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:34 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:34 no-preload-536520 kubelet[3930]: E1208 01:53:34.218753    3930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:34 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:34 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 2 (492.603762ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.19s)
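The kubelet journal above shows the proximate cause of this timeout: every restart attempt (counters 479 through 482) exits with "kubelet is configured to not run on a host using cgroup v1", so kubelet never stays up, the apiserver on port 8443 keeps refusing connections, and the node never reaches "Ready" within the 6m0s wait. A minimal way to confirm the host's cgroup version, assuming shell access to the runner (these commands are not part of the captured output; docker info has reported CgroupVersion since Docker 20.10):

	# Filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means cgroup v2, "tmpfs" means legacy cgroup v1.
	stat -fc %T /sys/fs/cgroup/

	# Docker reports the same information directly; prints "1" or "2".
	docker info --format '{{.CgroupVersion}}'

The docker info captured earlier in this log (CgroupDriver:cgroupfs on Ubuntu 20.04.6, kernel 5.15.0-1084-aws) is consistent with a cgroup v1 host, which this v1.35.0-beta.0 kubelet build is configured to reject.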

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (90.48s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1208 01:51:59.314922  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:52:12.527878  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:52:27.016949  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m28.791855295s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
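This addon failure has the same root cause as the dial errors in the stderr block above: nothing is listening on localhost:8443 inside the node, so every kubectl apply dies before it can validate a manifest. A quick sketch for confirming the apiserver is down before suspecting the addon itself (same binary and profile name as this test; /healthz is the standard apiserver health endpoint, and curl is known to exist in the node image because the harness earlier ran `curl -sS -m 2 https://registry.k8s.io/` inside it):

	# What minikube believes about the apiserver; the harness ran the same
	# check after the no-preload failure above and got "Stopped".
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779

	# Probe the apiserver port from inside the node; "connection refused"
	# here matches the kubectl errors above.
	out/minikube-linux-arm64 ssh -p newest-cni-457779 -- curl -sk https://localhost:8443/healthz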
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-457779
helpers_test.go:243: (dbg) docker inspect newest-cni-457779:

-- stdout --
	[
	    {
	        "Id": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	        "Created": "2025-12-08T01:43:39.768991386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1122247,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:43:39.838290223Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hostname",
	        "HostsPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hosts",
	        "LogPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515-json.log",
	        "Name": "/newest-cni-457779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-457779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-457779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	                "LowerDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-457779",
	                "Source": "/var/lib/docker/volumes/newest-cni-457779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-457779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-457779",
	                "name.minikube.sigs.k8s.io": "newest-cni-457779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7e1afe08172b5d6e1b59898e41a0a10f530b283274a009e928ed8f8bd2ac007",
	            "SandboxKey": "/var/run/docker/netns/b7e1afe08172",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-457779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:c8:ef:fa:a0:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e759035a3431798f7b6fae1fcd872afa7240c356fb1da4c53589714768a6edc3",
	                    "EndpointID": "2d01411269374733ce9c99388d7ff970811ced41e065bd82a2eb4412dd772c8f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-457779",
	                        "638bfd2d42fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
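For reference, individual fields from an inspect dump like the one above can be pulled directly with docker's Go-template `--format` flag, the same mechanism the minikube log below uses to look up the 22/tcp host port. A minimal sketch (the chosen fields are illustrative, not what the test asserts):

	# sketch: query run state and the host port mapped to the apiserver's 8443/tcp
	docker inspect -f '{{.State.Status}}' newest-cni-457779
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-457779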
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 6 (359.960008ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1208 01:53:23.873048 1136059 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
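The exit status 6 follows from the stderr above: the "newest-cni-457779" context is missing from the kubeconfig at /home/jenkins/minikube-integration/22054-843440/kubeconfig, so `status` can resolve the host but not the kubeconfig endpoint. A minimal recovery sketch along the lines the stdout warning itself suggests (assumes the cluster is otherwise reachable):

	# regenerate the kubeconfig entry for this profile, then re-check status
	minikube update-context -p newest-cni-457779
	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779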
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-719683 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:39 UTC │
	│ start   │ -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:39 UTC │ 08 Dec 25 01:40 UTC │
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:51 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:47:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:47:25.144534 1128548 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:47:25.144678 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144689 1128548 out.go:374] Setting ErrFile to fd 2...
	I1208 01:47:25.144694 1128548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:47:25.144937 1128548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:47:25.145300 1128548 out.go:368] Setting JSON to false
	I1208 01:47:25.146183 1128548 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23398,"bootTime":1765135047,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:47:25.146250 1128548 start.go:143] virtualization:  
	I1208 01:47:25.149287 1128548 out.go:179] * [no-preload-536520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:47:25.152995 1128548 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:47:25.153086 1128548 notify.go:221] Checking for updates...
	I1208 01:47:25.159060 1128548 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:47:25.162001 1128548 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:25.164903 1128548 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:47:25.167734 1128548 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:47:25.170597 1128548 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:47:25.173940 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:25.174644 1128548 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:47:25.196090 1128548 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:47:25.196210 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.265905 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.255621287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.266017 1128548 docker.go:319] overlay module found
	I1208 01:47:25.271179 1128548 out.go:179] * Using the docker driver based on existing profile
	I1208 01:47:25.274140 1128548 start.go:309] selected driver: docker
	I1208 01:47:25.274161 1128548 start.go:927] validating driver "docker" against &{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.274269 1128548 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:47:25.275138 1128548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:47:25.329239 1128548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:47:25.319858232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:47:25.329577 1128548 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 01:47:25.329613 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:25.329682 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:25.329727 1128548 start.go:353] cluster config:
	{Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:25.332831 1128548 out.go:179] * Starting "no-preload-536520" primary control-plane node in "no-preload-536520" cluster
	I1208 01:47:25.335606 1128548 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:47:25.338488 1128548 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:47:25.341508 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:25.341637 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.341965 1128548 cache.go:107] acquiring lock: {Name:mk26e7e88ac6993c5141f2d02121dfa2fc547fd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342041 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1208 01:47:25.342050 1128548 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.869µs
	I1208 01:47:25.342063 1128548 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1208 01:47:25.342075 1128548 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:47:25.342168 1128548 cache.go:107] acquiring lock: {Name:mk0f1b4d6e089d68a7c2b058d311e225652853b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342209 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1208 01:47:25.342215 1128548 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 51.192µs
	I1208 01:47:25.342221 1128548 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342231 1128548 cache.go:107] acquiring lock: {Name:mk597bd9b4cd05f2d1a0093859d8b23b8ea1cd1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342263 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1208 01:47:25.342268 1128548 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 38.556µs
	I1208 01:47:25.342274 1128548 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342284 1128548 cache.go:107] acquiring lock: {Name:mka22e7ada81429241ca2443bce21a3f31b8eb66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342310 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1208 01:47:25.342315 1128548 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.83µs
	I1208 01:47:25.342323 1128548 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342334 1128548 cache.go:107] acquiring lock: {Name:mkfea4ee3c261ad6c1d7efee63fc672216a4c310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342372 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1208 01:47:25.342381 1128548 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 47.246µs
	I1208 01:47:25.342387 1128548 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1208 01:47:25.342398 1128548 cache.go:107] acquiring lock: {Name:mk8813c8ba18f703b4246d4ffd8656e53b0f2ec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342424 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1208 01:47:25.342429 1128548 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 32µs
	I1208 01:47:25.342434 1128548 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1208 01:47:25.342566 1128548 cache.go:107] acquiring lock: {Name:mk98329aaba04bc9ea4839996e52989df0918014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342601 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1208 01:47:25.342606 1128548 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 163.997µs
	I1208 01:47:25.342654 1128548 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1208 01:47:25.342670 1128548 cache.go:107] acquiring lock: {Name:mk58db1a89606bc77924fd68a726167dcd840a38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.342704 1128548 cache.go:115] /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1208 01:47:25.342709 1128548 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 41.297µs
	I1208 01:47:25.342715 1128548 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1208 01:47:25.342736 1128548 cache.go:87] Successfully saved all images to host disk.
	I1208 01:47:25.366899 1128548 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:47:25.366925 1128548 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:47:25.366941 1128548 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:47:25.366970 1128548 start.go:360] acquireMachinesLock for no-preload-536520: {Name:mkcfe59c9f9ccdd77be288a5dfb4e3b57f6ad839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:47:25.367026 1128548 start.go:364] duration metric: took 36.948µs to acquireMachinesLock for "no-preload-536520"
	I1208 01:47:25.367050 1128548 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:47:25.367061 1128548 fix.go:54] fixHost starting: 
	I1208 01:47:25.367326 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.386712 1128548 fix.go:112] recreateIfNeeded on no-preload-536520: state=Stopped err=<nil>
	W1208 01:47:25.386739 1128548 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:47:25.390105 1128548 out.go:252] * Restarting existing docker container for "no-preload-536520" ...
	I1208 01:47:25.390183 1128548 cli_runner.go:164] Run: docker start no-preload-536520
	I1208 01:47:25.647203 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:25.672717 1128548 kic.go:430] container "no-preload-536520" state is running.
	I1208 01:47:25.673659 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:25.696605 1128548 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/config.json ...
	I1208 01:47:25.696850 1128548 machine.go:94] provisionDockerMachine start ...
	I1208 01:47:25.696917 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:25.719881 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:25.720251 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:25.720267 1128548 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:47:25.721006 1128548 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:47:28.871078 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:28.871100 1128548 ubuntu.go:182] provisioning hostname "no-preload-536520"
	I1208 01:47:28.871170 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:28.890322 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:28.890666 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:28.890685 1128548 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-536520 && echo "no-preload-536520" | sudo tee /etc/hostname
	I1208 01:47:29.051907 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-536520
	
	I1208 01:47:29.051990 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.071065 1128548 main.go:143] libmachine: Using SSH client type: native
	I1208 01:47:29.071389 1128548 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1208 01:47:29.071409 1128548 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-536520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-536520/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-536520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:47:29.230899 1128548 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:47:29.230991 1128548 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:47:29.231044 1128548 ubuntu.go:190] setting up certificates
	I1208 01:47:29.231075 1128548 provision.go:84] configureAuth start
	I1208 01:47:29.231167 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.253696 1128548 provision.go:143] copyHostCerts
	I1208 01:47:29.253770 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:47:29.253785 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:47:29.253863 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:47:29.253977 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:47:29.253988 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:47:29.254015 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:47:29.254069 1128548 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:47:29.254079 1128548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:47:29.254103 1128548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:47:29.254158 1128548 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.no-preload-536520 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-536520]
	I1208 01:47:29.311134 1128548 provision.go:177] copyRemoteCerts
	I1208 01:47:29.311266 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:47:29.311345 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.328954 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.438688 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:47:29.457711 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:47:29.476430 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:47:29.494899 1128548 provision.go:87] duration metric: took 263.788812ms to configureAuth
	I1208 01:47:29.494927 1128548 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:47:29.495124 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:29.495136 1128548 machine.go:97] duration metric: took 3.798277669s to provisionDockerMachine
	I1208 01:47:29.495144 1128548 start.go:293] postStartSetup for "no-preload-536520" (driver="docker")
	I1208 01:47:29.495155 1128548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:47:29.495213 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:47:29.495257 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.514045 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.618827 1128548 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:47:29.622435 1128548 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:47:29.622486 1128548 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:47:29.622498 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:47:29.622555 1128548 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:47:29.622644 1128548 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:47:29.622753 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:47:29.630547 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:29.648755 1128548 start.go:296] duration metric: took 153.595239ms for postStartSetup
	I1208 01:47:29.648836 1128548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:47:29.648887 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.666163 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.767705 1128548 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:47:29.772600 1128548 fix.go:56] duration metric: took 4.405531603s for fixHost
	I1208 01:47:29.772626 1128548 start.go:83] releasing machines lock for "no-preload-536520", held for 4.405586815s
	I1208 01:47:29.772706 1128548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-536520
	I1208 01:47:29.789841 1128548 ssh_runner.go:195] Run: cat /version.json
	I1208 01:47:29.789937 1128548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:47:29.789945 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.789996 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:29.812717 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:29.816251 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:30.016925 1128548 ssh_runner.go:195] Run: systemctl --version
	I1208 01:47:30.052830 1128548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:47:30.068491 1128548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:47:30.068623 1128548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:47:30.091127 1128548 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:47:30.091213 1128548 start.go:496] detecting cgroup driver to use...
	I1208 01:47:30.092843 1128548 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:47:30.092996 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:47:30.118949 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:47:30.135926 1128548 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:47:30.136004 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:47:30.154610 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:47:30.169484 1128548 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:47:30.284887 1128548 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:47:30.401821 1128548 docker.go:234] disabling docker service ...
	I1208 01:47:30.401907 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:47:30.417825 1128548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:47:30.431329 1128548 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:47:30.549687 1128548 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:47:30.701450 1128548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:47:30.714761 1128548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:47:30.729964 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:47:30.740571 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:47:30.749764 1128548 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:47:30.749912 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:47:30.759343 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.768528 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:47:30.777543 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:47:30.786277 1128548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:47:30.795137 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:47:30.804067 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:47:30.812868 1128548 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:47:30.821824 1128548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:47:30.829778 1128548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:47:30.837363 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:30.946320 1128548 ssh_runner.go:195] Run: sudo systemctl restart containerd
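Editor's note: the run of sed commands above rewrites /etc/containerd/config.toml in place before the daemon restart. As a hedged aside, a rough Go equivalent of just the SystemdCgroup edit (the path and value come from the log; the helper itself is made up):

package main

import (
	"log"
	"os"
	"regexp"
)

// setSystemdCgroup mirrors the logged sed invocation: every
// "SystemdCgroup = ..." line keeps its indentation but has its value
// forced to the desired setting.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	value := "false"
	if enabled {
		value = "true"
	}
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = "+value))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		log.Fatal(err)
	}
}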
	I1208 01:47:31.048379 1128548 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:47:31.048491 1128548 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:47:31.052641 1128548 start.go:564] Will wait 60s for crictl version
	I1208 01:47:31.052748 1128548 ssh_runner.go:195] Run: which crictl
	I1208 01:47:31.056733 1128548 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:47:31.082010 1128548 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:47:31.082131 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.107752 1128548 ssh_runner.go:195] Run: containerd --version
	I1208 01:47:31.134903 1128548 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:47:31.137907 1128548 cli_runner.go:164] Run: docker network inspect no-preload-536520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:47:31.155436 1128548 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1208 01:47:31.159751 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
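Editor's note: the /etc/hosts update above uses a filter-then-append pattern: drop any stale host.minikube.internal line, append the fresh mapping, and copy a temp file back into place. A stdlib-Go rendering of the same idea (the rename-based write is an assumption; the logged shell copies via sudo cp instead):

package main

import (
	"log"
	"os"
	"strings"
)

// upsertHostsEntry rewrites a hosts file so that exactly one line maps
// the given name, mirroring the grep -v / echo pipeline from the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}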
	I1208 01:47:31.170770 1128548 kubeadm.go:884] updating cluster {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:47:31.170896 1128548 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:47:31.170961 1128548 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:47:31.196822 1128548 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:47:31.196850 1128548 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:47:31.196858 1128548 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:47:31.196959 1128548 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-536520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:47:31.197036 1128548 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:47:31.226544 1128548 cni.go:84] Creating CNI manager for ""
	I1208 01:47:31.226567 1128548 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:47:31.226586 1128548 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 01:47:31.226628 1128548 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-536520 NodeName:no-preload-536520 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:47:31.226797 1128548 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-536520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:47:31.226877 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:47:31.234989 1128548 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:47:31.235078 1128548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:47:31.242869 1128548 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:47:31.261281 1128548 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:47:31.274779 1128548 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
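Editor's note: the 2237-byte kubeadm.yaml.new just staged is the multi-document config rendered above. As a hedged aside, one way to sanity-check such a file before kubeadm consumes it is to confirm every document parses and name its kind (uses gopkg.in/yaml.v3; this check is illustrative, not part of minikube's flow):

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// checkKubeadmYAML walks every document in a multi-doc YAML file and
// reports its kind, failing fast on the first parse error.
func checkKubeadmYAML(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 0; ; i++ {
		var obj map[string]interface{}
		if err := dec.Decode(&obj); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return fmt.Errorf("document %d: %w", i, err)
		}
		fmt.Printf("document %d: kind=%v\n", i, obj["kind"])
	}
}

func main() {
	if err := checkKubeadmYAML("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		log.Fatal(err)
	}
}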
	I1208 01:47:31.288252 1128548 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:47:31.292107 1128548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:47:31.302333 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:31.430216 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:31.454999 1128548 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520 for IP: 192.168.85.2
	I1208 01:47:31.455023 1128548 certs.go:195] generating shared ca certs ...
	I1208 01:47:31.455040 1128548 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:31.455242 1128548 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:47:31.455311 1128548 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:47:31.455324 1128548 certs.go:257] generating profile certs ...
	I1208 01:47:31.456430 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.key
	I1208 01:47:31.456527 1128548 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key.759f0035
	I1208 01:47:31.456618 1128548 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key
	I1208 01:47:31.456780 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:47:31.456840 1128548 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:47:31.456857 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:47:31.456908 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:47:31.457070 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:47:31.457132 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:47:31.457218 1128548 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:47:31.457887 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:47:31.480298 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:47:31.500772 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:47:31.521065 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:47:31.540174 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:47:31.558342 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:47:31.576697 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:47:31.595322 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:47:31.613726 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:47:31.631953 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:47:31.650290 1128548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:47:31.669394 1128548 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:47:31.682263 1128548 ssh_runner.go:195] Run: openssl version
	I1208 01:47:31.688781 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.696556 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:47:31.704170 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.707971 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.708038 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:47:31.749348 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:47:31.756878 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.764251 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:47:31.771591 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775408 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.775525 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:47:31.817721 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:47:31.825386 1128548 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.832732 1128548 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:47:31.840622 1128548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844445 1128548 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.844541 1128548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:47:31.885480 1128548 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
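Editor's note: the ls / openssl x509 -hash / ln -fs / test -L sequence above implements the classic CA-directory layout: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can look it up by hash (51391683.0 above corresponds to 846711.pem). A sketch of the same dance, shelling out to openssl as the log does (paths from the log; the helper name is made up):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash and
// installs the <hash>.0 symlink that the cert-directory lookup expects.
func linkCertByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("installed", link)
}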
	I1208 01:47:31.892792 1128548 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:47:31.896494 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:47:31.937567 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:47:31.978872 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:47:32.020623 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:47:32.062625 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:47:32.104715 1128548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
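Editor's note: each "openssl x509 -checkend 86400" run above exits non-zero if the certificate expires within 24 hours, which is how the restart path decides whether to regenerate certs. The same check in pure Go with crypto/x509 (illustrative; minikube shells out to openssl as logged):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// will be expired after the given duration, like openssl's -checkend.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}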
	I1208 01:47:32.148960 1128548 kubeadm.go:401] StartCluster: {Name:no-preload-536520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-536520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:47:32.149129 1128548 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:47:32.149249 1128548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:47:32.177182 1128548 cri.go:89] found id: ""
	I1208 01:47:32.177302 1128548 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:47:32.185467 1128548 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:47:32.185537 1128548 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:47:32.185605 1128548 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:47:32.193198 1128548 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:47:32.193631 1128548 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-536520" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.193734 1128548 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-536520" cluster setting kubeconfig missing "no-preload-536520" context setting]
	I1208 01:47:32.194046 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
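Editor's note: the kubeconfig repair above adds the missing cluster and context entries under a write lock before rewriting the file. A sketch of the same repair with client-go's clientcmd package (names and server URL mirror the log; the file locking and auth-info wiring shown in the log are simplified away):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts cluster and context entries for a profile if
// they are missing, roughly what the verify-endpoint failure path does.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := repairKubeconfig(
		"/home/jenkins/minikube-integration/22054-843440/kubeconfig",
		"no-preload-536520",
		"https://192.168.85.2:8443",
	)
	if err != nil {
		log.Fatal(err)
	}
}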
	I1208 01:47:32.195334 1128548 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:47:32.203280 1128548 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1208 01:47:32.203313 1128548 kubeadm.go:602] duration metric: took 17.762571ms to restartPrimaryControlPlane
	I1208 01:47:32.203323 1128548 kubeadm.go:403] duration metric: took 54.376484ms to StartCluster
	I1208 01:47:32.203356 1128548 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.203428 1128548 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:47:32.204022 1128548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:47:32.204232 1128548 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:47:32.204520 1128548 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:47:32.204571 1128548 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:47:32.204638 1128548 addons.go:70] Setting storage-provisioner=true in profile "no-preload-536520"
	I1208 01:47:32.204677 1128548 addons.go:239] Setting addon storage-provisioner=true in "no-preload-536520"
	I1208 01:47:32.204699 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.205163 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205504 1128548 addons.go:70] Setting default-storageclass=true in profile "no-preload-536520"
	I1208 01:47:32.205524 1128548 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-536520"
	I1208 01:47:32.205784 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.205965 1128548 addons.go:70] Setting dashboard=true in profile "no-preload-536520"
	I1208 01:47:32.205979 1128548 addons.go:239] Setting addon dashboard=true in "no-preload-536520"
	W1208 01:47:32.205986 1128548 addons.go:248] addon dashboard should already be in state true
	I1208 01:47:32.206006 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.206434 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.211155 1128548 out.go:179] * Verifying Kubernetes components...
	I1208 01:47:32.214232 1128548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:47:32.243438 1128548 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:47:32.247673 1128548 addons.go:239] Setting addon default-storageclass=true in "no-preload-536520"
	I1208 01:47:32.247722 1128548 host.go:66] Checking if "no-preload-536520" exists ...
	I1208 01:47:32.248146 1128548 cli_runner.go:164] Run: docker container inspect no-preload-536520 --format={{.State.Status}}
	I1208 01:47:32.248283 1128548 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:47:32.249393 1128548 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.249422 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:47:32.249479 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.254001 1128548 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:47:32.258300 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:47:32.258329 1128548 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:47:32.258396 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.286961 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.294498 1128548 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.294541 1128548 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:47:32.294621 1128548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-536520
	I1208 01:47:32.308511 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.336182 1128548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/no-preload-536520/id_rsa Username:docker}
	I1208 01:47:32.455434 1128548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:47:32.503421 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:47:32.512052 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:47:32.512091 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:47:32.529651 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:47:32.529679 1128548 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:47:32.538583 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:32.564793 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:47:32.564828 1128548 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:47:32.600166 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:47:32.600234 1128548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:47:32.620391 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:47:32.620414 1128548 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:47:32.635342 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:47:32.635368 1128548 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:47:32.649355 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:47:32.649379 1128548 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:47:32.663687 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:47:32.663714 1128548 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:47:32.677279 1128548 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:32.677303 1128548 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:47:32.690968 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:33.092759 1128548 node_ready.go:35] waiting up to 6m0s for node "no-preload-536520" to be "Ready" ...
	W1208 01:47:33.093211 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093277 1128548 retry.go:31] will retry after 135.377583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.093354 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093401 1128548 retry.go:31] will retry after 356.085059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.093693 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.093739 1128548 retry.go:31] will retry after 290.352829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
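Editor's note: the repeated "will retry after Nms" lines above and below come from a retry loop with randomized, growing delays while the apiserver on localhost:8443 is still refusing connections. A minimal stdlib sketch of that pattern (the attempt count and backoff constants are made up; minikube's retry.go differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff reruns fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries: the shape
// behind the "will retry after 135ms / 356ms / 504ms" log lines.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}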
	I1208 01:47:33.229413 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.295712 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.295753 1128548 retry.go:31] will retry after 504.528201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.385144 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:33.450468 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:33.455498 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.455579 1128548 retry.go:31] will retry after 210.308534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:33.513454 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.513515 1128548 retry.go:31] will retry after 261.594769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.666341 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:33.730275 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same "connection refused" validation error repeats for each of the remaining nine dashboard manifests]
	I1208 01:47:33.730367 1128548 retry.go:31] will retry after 515.285755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:47:33.775591 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:33.801214 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:33.874343 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.874382 1128548 retry.go:31] will retry after 373.513153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1208 01:47:33.879699 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:33.879734 1128548 retry.go:31] will retry after 640.492075ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:47:34.246844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:47:34.248194 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:34.323597 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same "connection refused" validation error repeats for each of the remaining nine dashboard manifests]
	I1208 01:47:34.323634 1128548 retry.go:31] will retry after 1.019529809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1208 01:47:34.339043 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.339089 1128548 retry.go:31] will retry after 1.209309516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:47:34.520466 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:34.578496 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:34.578579 1128548 retry.go:31] will retry after 641.799617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1208 01:47:35.094210 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
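The node_ready.go warning above polls the Ready condition of node no-preload-536520 against the cluster endpoint (192.168.85.2:8443), which refuses connections for the same reason localhost:8443 does. A rough equivalent of that poll using kubectl's JSONPath output (an assumption for illustration; minikube queries the API with a Go client rather than kubectl):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// nodeReady asks kubectl for the status of the node's Ready condition.
	func nodeReady(name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "node", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		for attempt := 0; attempt < 30; attempt++ {
			ready, err := nodeReady("no-preload-536520")
			if err != nil {
				fmt.Println("error getting node condition (will retry):", err)
			} else if ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second) // matches the ~2.5s cadence of the warnings above
		}
	}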
	I1208 01:47:35.220879 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:35.291769 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.291804 1128548 retry.go:31] will retry after 1.824974972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:47:35.343984 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:35.420724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same "connection refused" validation error repeats for each of the remaining nine dashboard manifests]
	I1208 01:47:35.420761 1128548 retry.go:31] will retry after 1.505282353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:47:35.548906 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:35.619439 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:35.619474 1128548 retry.go:31] will retry after 1.475994436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:47:36.927068 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:36.990909 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same "connection refused" validation error repeats for each of the remaining nine dashboard manifests]
	I1208 01:47:36.990950 1128548 retry.go:31] will retry after 1.384042047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:47:37.095678 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:47:37.117700 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:37.162835 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.162944 1128548 retry.go:31] will retry after 2.706380277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1208 01:47:37.196118 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:37.196155 1128548 retry.go:31] will retry after 2.546989667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1208 01:47:37.593980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:38.375563 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:38.448945 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[the same "connection refused" validation error repeats for each of the remaining nine dashboard manifests]
	I1208 01:47:38.448983 1128548 retry.go:31] will retry after 4.228344134s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:47:39.743778 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:39.805386 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.805423 1128548 retry.go:31] will retry after 1.941295739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:47:39.869521 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:39.940830 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:39.940861 1128548 retry.go:31] will retry after 1.677329859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1208 01:47:40.093478 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:41.618647 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:41.680452 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.680482 1128548 retry.go:31] will retry after 3.415857651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:47:41.747716 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:41.810998 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:41.811032 1128548 retry.go:31] will retry after 4.001958095s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1208 01:47:42.593808 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
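Every error in this stretch has the same root cause: nothing is accepting connections on port 8443, so both the OpenAPI download and the node query are refused. A quick probe of the exact URL from the error messages, skipping TLS verification because the apiserver's self-signed certificate is not trusted outside the cluster (diagnostic sketch only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// Skip certificate verification for this one-off diagnostic probe.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the "connection refused" case in the log
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver responded:", resp.Status)
	}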
	I1208 01:47:42.678234 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:42.741215 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:42.741256 1128548 retry.go:31] will retry after 4.935696048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:45.093503 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:45.096924 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:45.182970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.183006 1128548 retry.go:31] will retry after 5.169461339s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.814151 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:45.895671 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:45.895706 1128548 retry.go:31] will retry after 6.069460108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:47.593429 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:47.677825 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:47.741017 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:47.741053 1128548 retry.go:31] will retry after 6.358930969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:50.093389 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:50.240219 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.005281202s
	I1208 01:47:50.240251 1121810 kubeadm.go:319] 
	I1208 01:47:50.240305 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:47:50.240337 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:47:50.240436 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:47:50.240441 1121810 kubeadm.go:319] 
	I1208 01:47:50.240540 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:47:50.240570 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:47:50.240599 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:47:50.240604 1121810 kubeadm.go:319] 
	I1208 01:47:50.244623 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:47:50.245144 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:47:50.245269 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:47:50.245523 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:47:50.245536 1121810 kubeadm.go:319] 
	I1208 01:47:50.245606 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
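	[note] The failure above is kubeadm's kubelet health check timing out: it polls http://127.0.0.1:10248/healthz for up to 4m0s before giving up. The triage kubeadm itself suggests, run on the node (for the docker driver, inside the minikube container, e.g. via minikube ssh), is roughly:
	
	    # kubeadm's suggested checks, verbatim from the output above
	    systemctl status kubelet
	    journalctl -xeu kubelet
	    # the probe kubeadm was polling while it waited
	    curl -sSL http://127.0.0.1:10248/healthz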
	W1208 01:47:50.245750 1121810 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-457779] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.005281202s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
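	[note] The SystemVerification warnings in the stderr above are a plausible root cause for the unhealthy kubelet: the node runs cgroups v1 (kernel 5.15.0-1084-aws), and per the warning, kubelet v1.35 or newer only supports cgroups v1 when the kubelet configuration option FailCgroupV1 is explicitly set to false. A quick way to confirm which cgroup mode a node is on (this check is an assumption added for triage, not taken from the log):
	
	    # prints "cgroup2fs" on a cgroups v2 host, "tmpfs" on a v1 hierarchy
	    stat -fc %T /sys/fs/cgroup/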
	
	I1208 01:47:50.245846 1121810 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1208 01:47:50.662033 1121810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:47:50.675434 1121810 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 01:47:50.675505 1121810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 01:47:50.683543 1121810 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 01:47:50.683562 1121810 kubeadm.go:158] found existing configuration files:
	
	I1208 01:47:50.683614 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 01:47:50.691591 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 01:47:50.691654 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 01:47:50.699350 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 01:47:50.707376 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 01:47:50.707443 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 01:47:50.715135 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.723172 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 01:47:50.723260 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 01:47:50.731275 1121810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 01:47:50.739267 1121810 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 01:47:50.739332 1121810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
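	[note] Before the second init attempt, minikube sweeps for stale kubeconfigs: for each file under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the grep fails. Here every grep exits 2 because kubeadm reset already removed the files, so each rm is a no-op. Per file, the logged command pair condenses to (sketch; the actual run uses separate grep and rm invocations):
	
	    # drop the kubeconfig unless it already points at the expected endpoint
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	      || sudo rm -f /etc/kubernetes/admin.conf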
	I1208 01:47:50.747081 1121810 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 01:47:50.787448 1121810 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1208 01:47:50.787678 1121810 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 01:47:50.866311 1121810 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 01:47:50.866389 1121810 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 01:47:50.866433 1121810 kubeadm.go:319] OS: Linux
	I1208 01:47:50.866508 1121810 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 01:47:50.866562 1121810 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 01:47:50.866613 1121810 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 01:47:50.866667 1121810 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 01:47:50.866720 1121810 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 01:47:50.866772 1121810 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 01:47:50.866821 1121810 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 01:47:50.866871 1121810 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 01:47:50.866921 1121810 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 01:47:50.933039 1121810 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 01:47:50.933171 1121810 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 01:47:50.933266 1121810 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 01:47:50.942872 1121810 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 01:47:50.948105 1121810 out.go:252]   - Generating certificates and keys ...
	I1208 01:47:50.948200 1121810 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 01:47:50.948265 1121810 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 01:47:50.948342 1121810 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1208 01:47:50.948403 1121810 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1208 01:47:50.948473 1121810 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1208 01:47:50.948531 1121810 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1208 01:47:50.948594 1121810 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1208 01:47:50.948661 1121810 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1208 01:47:50.948735 1121810 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1208 01:47:50.948807 1121810 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1208 01:47:50.948845 1121810 kubeadm.go:319] [certs] Using the existing "sa" key
	I1208 01:47:50.948901 1121810 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 01:47:51.112853 1121810 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 01:47:51.634368 1121810 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 01:47:51.809543 1121810 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 01:47:52.224203 1121810 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 01:47:52.422413 1121810 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 01:47:52.423095 1121810 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 01:47:52.425801 1121810 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 01:47:52.429055 1121810 out.go:252]   - Booting up control plane ...
	I1208 01:47:52.429171 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 01:47:52.429263 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 01:47:52.429335 1121810 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 01:47:52.449800 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 01:47:52.449912 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 01:47:52.458225 1121810 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 01:47:52.458717 1121810 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 01:47:52.458770 1121810 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 01:47:52.594110 1121810 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 01:47:52.594228 1121810 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 01:47:50.353033 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:47:50.436848 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:50.436881 1128548 retry.go:31] will retry after 13.13295311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:51.966065 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:47:52.046307 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:52.046338 1128548 retry.go:31] will retry after 13.071324249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:52.093819 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:54.094117 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:47:54.100443 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:47:54.199978 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:47:54.200013 1128548 retry.go:31] will retry after 5.921409717s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:47:56.593485 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:47:58.594917 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:00.169117 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:00.465737 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:00.465774 1128548 retry.go:31] will retry after 20.435782348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:01.093648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:03.570310 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:03.593451 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:03.632880 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:03.632912 1128548 retry.go:31] will retry after 21.217435615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
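Every apply above fails the same way: kubectl first downloads the OpenAPI schema from the apiserver to validate the manifests, and the apiserver on localhost:8443 is refusing connections, so nothing is ever submitted. A quick way to confirm the apiserver is down from inside the node, outside the retry loop (a sketch; the profile name is taken from this log):

	minikube -p no-preload-536520 ssh -- curl -sk -m 5 https://localhost:8443/openapi/v2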
	I1208 01:48:05.117897 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:05.178316 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:05.178352 1128548 retry.go:31] will retry after 19.478477459s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:05.594138 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:08.093454 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:10.593499 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:13.093411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:15.093543 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:17.593421 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:20.093502 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:20.902007 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:20.961724 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:20.961758 1128548 retry.go:31] will retry after 19.271074882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:22.593561 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:24.657892 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:24.716774 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.716812 1128548 retry.go:31] will retry after 21.882989692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.851274 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:24.908152 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:24.908186 1128548 retry.go:31] will retry after 13.56417867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:25.093911 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:27.593411 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:29.594271 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:32.093606 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:34.094178 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:36.593455 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:38.472558 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:48:38.531541 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:38.531575 1128548 retry.go:31] will retry after 35.735118355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:38.593962 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:40.233963 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:48:40.295686 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:40.295723 1128548 retry.go:31] will retry after 24.954393837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:41.093636 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:43.094034 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:45.593528 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:48:46.601180 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:48:46.663105 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:48:46.663141 1128548 retry.go:31] will retry after 26.276311259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:48:47.594156 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:50.093521 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:52.593610 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:55.093548 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:57.094350 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:48:59.593344 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:01.593410 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:03.594090 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:05.250844 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:49:05.313013 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:05.313123 1128548 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
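The error text suggests --validate=false, but that flag only skips the client-side schema download; the apply would still fail against a dead apiserver. For completeness, the invocation the error message proposes would look like this (same command as in the log, with the flag added):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml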
	W1208 01:49:06.093980 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:08.094120 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:10.094394 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:49:12.593567 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:49:12.940260 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:49:12.998388 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:12.998515 1128548 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.267569 1128548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:49:14.332970 1128548 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:49:14.333076 1128548 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:49:14.337901 1128548 out.go:179] * Enabled addons: 
	I1208 01:49:14.340893 1128548 addons.go:530] duration metric: took 1m42.136312022s for enable addons: enabled=[]
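After the addon phase gives up with enabled=[], the per-profile addon state can be double-checked with the standard minikube CLI (profile name taken from this log):

	minikube -p no-preload-536520 addons list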
	W1208 01:49:14.593648 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 67 further node_ready retries against https://192.168.85.2:8443, identical apart from timestamps (01:49:17 through 01:51:47), elided ...]
	W1208 01:51:49.594227 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:51:52.594163 1121810 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000318083s
	I1208 01:51:52.594189 1121810 kubeadm.go:319] 
	I1208 01:51:52.594247 1121810 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1208 01:51:52.594280 1121810 kubeadm.go:319] 	- The kubelet is not running
	I1208 01:51:52.594385 1121810 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1208 01:51:52.594389 1121810 kubeadm.go:319] 
	I1208 01:51:52.594514 1121810 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1208 01:51:52.594548 1121810 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1208 01:51:52.594578 1121810 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1208 01:51:52.594582 1121810 kubeadm.go:319] 
	I1208 01:51:52.598647 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 01:51:52.599081 1121810 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1208 01:51:52.599190 1121810 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 01:51:52.599423 1121810 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1208 01:51:52.599429 1121810 kubeadm.go:319] 
	I1208 01:51:52.599545 1121810 kubeadm.go:403] duration metric: took 8m7.029694705s to StartCluster
	I1208 01:51:52.599580 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:51:52.599643 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:51:52.599710 1121810 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1208 01:51:52.631244 1121810 cri.go:89] found id: ""
	I1208 01:51:52.631271 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.631280 1121810 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:51:52.631288 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:51:52.631353 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:51:52.658412 1121810 cri.go:89] found id: ""
	I1208 01:51:52.658492 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.658502 1121810 logs.go:284] No container was found matching "etcd"
	I1208 01:51:52.658519 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:51:52.658610 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:51:52.685789 1121810 cri.go:89] found id: ""
	I1208 01:51:52.685814 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.685823 1121810 logs.go:284] No container was found matching "coredns"
	I1208 01:51:52.685829 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:51:52.685887 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:51:52.713200 1121810 cri.go:89] found id: ""
	I1208 01:51:52.713226 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.713235 1121810 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:51:52.713241 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:51:52.713299 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:51:52.737730 1121810 cri.go:89] found id: ""
	I1208 01:51:52.737756 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.737765 1121810 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:51:52.737771 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:51:52.737829 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:51:52.763894 1121810 cri.go:89] found id: ""
	I1208 01:51:52.763928 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.763937 1121810 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:51:52.763944 1121810 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:51:52.764012 1121810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:51:52.788698 1121810 cri.go:89] found id: ""
	I1208 01:51:52.788762 1121810 logs.go:282] 0 containers: []
	W1208 01:51:52.788777 1121810 logs.go:284] No container was found matching "kindnet"
	I1208 01:51:52.788788 1121810 logs.go:123] Gathering logs for kubelet ...
	I1208 01:51:52.788799 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:51:52.846920 1121810 logs.go:123] Gathering logs for dmesg ...
	I1208 01:51:52.846956 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:51:52.862048 1121810 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:51:52.862076 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:51:52.931760 1121810 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:51:52.923107    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.923875    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.925579    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.926258    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:51:52.927897    4872 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... the same five "connection refused" client errors and the localhost:8443 refusal as quoted above ...]
	
	** /stderr **
	I1208 01:51:52.931799 1121810 logs.go:123] Gathering logs for containerd ...
	I1208 01:51:52.931812 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:51:52.970819 1121810 logs.go:123] Gathering logs for container status ...
	I1208 01:51:52.970855 1121810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1208 01:51:53.000445 1121810 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000318083s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1208 01:51:53.000503 1121810 out.go:285] * 
	W1208 01:51:53.000560 1121810 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init stdout and kubelet-check failure as quoted above ...]
	stderr:
	[... the same three SystemVerification/Service-kubelet warnings and wait-control-plane error as quoted above ...]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.000578 1121810 out.go:285] * 
	W1208 01:51:53.002833 1121810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:51:53.009552 1121810 out.go:203] 
	W1208 01:51:53.012504 1121810 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[... identical kubeadm init stdout and kubelet-check failure as quoted above ...]
	stderr:
	[... the same three SystemVerification/Service-kubelet warnings and wait-control-plane error as quoted above ...]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1208 01:51:53.012580 1121810 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1208 01:51:53.012606 1121810 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1208 01:51:53.015855 1121810 out.go:203] 
	W1208 01:51:52.093569 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	[... 37 near-identical "connection refused" retries, one every ~2.5s from 01:51:54 to 01:53:16, omitted ...]
	W1208 01:53:18.593503 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432724135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432797876Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432915423Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.432985135Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433045279Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433112291Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433175315Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433234597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433301470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.433386049Z" level=info msg="Connect containerd service"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.434110863Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.435147886Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.446860134Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447070458Z" level=info msg="Start recovering state"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447071435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.447339736Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482231401Z" level=info msg="Start event monitor"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482425824Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482616693Z" level=info msg="Start streaming server"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482689407Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482749281Z" level=info msg="runtime interface starting up..."
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482802434Z" level=info msg="starting plugins..."
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.482868026Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:43:43 newest-cni-457779 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:43:43 newest-cni-457779 containerd[758]: time="2025-12-08T01:43:43.484885964Z" level=info msg="containerd successfully booted in 0.079491s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:53:24.616717    5923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:24.617435    5923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:24.619104    5923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:24.619803    5923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:53:24.621583    5923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:53:24 up  6:35,  0 user,  load average: 0.89, 0.79, 1.45
	Linux newest-cni-457779 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:53:21 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 438.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:22 newest-cni-457779 kubelet[5800]: E1208 01:53:22.155820    5800 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 439.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:22 newest-cni-457779 kubelet[5806]: E1208 01:53:22.906086    5806 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:22 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:23 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 440.
	Dec 08 01:53:23 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:23 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:23 newest-cni-457779 kubelet[5817]: E1208 01:53:23.692709    5817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:23 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:23 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:53:24 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 441.
	Dec 08 01:53:24 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:24 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:53:24 newest-cni-457779 kubelet[5875]: E1208 01:53:24.433319    5875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:53:24 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:53:24 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
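
The kubelet journal above isolates the root cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter at 441), so no static pods ever come up and every apiserver probe ends in connection refused. A quick host-side check distinguishes the two cgroup modes; this is a generic sketch, not a command the harness runs:

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on a cgroup v1 host.
	stat -fc %T /sys/fs/cgroup

The [WARNING SystemVerification] text quoted earlier also spells out the opt-back-in for cgroup v1: set the KubeletConfiguration option 'FailCgroupV1' to 'false' and explicitly skip the validation (SystemVerification already appears in minikube's --ignore-preflight-errors list above). A minimal sketch of such a kubeadm patch follows; only the field name comes from the warning text, while the file name and location are assumptions (the log above shows kubeadm already applying a strategic-merge patch to the "kubeletconfiguration" target):

	# Hypothetical patch file, named per kubeadm's <target>+<patchtype> convention.
	mkdir -p patches
	cat <<'EOF' > patches/kubeletconfiguration+strategic.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
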
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 6 (378.650535ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:53:25.179808 1136290 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-457779" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (90.48s)
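
Note that the status probe above fails one step before the cluster itself: the "newest-cni-457779" entry does not appear in the shared kubeconfig, so `status` cannot resolve an apiserver endpoint at all (exit status 6). A minimal sketch of the fix the warning itself suggests, using the profile name from this test and stock minikube/kubectl flags:

	# Rewrite this profile's kubeconfig entry, then confirm the active context.
	out/minikube-linux-arm64 update-context -p newest-cni-457779
	kubectl config current-context
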

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (374.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m9.176184983s)

                                                
                                                
-- stdout --
	* [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
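
The stdout above shows the second start reusing the existing profile and stalling at the same point as the first (exit status 105, after "Enabled addons:"). For reference, the suggestion printed with the earlier K8S_KUBELET_NOT_RUNNING failure (issue #4172) would be retried roughly as below; every flag appears verbatim elsewhere in this report, but the suggestion targets the kubelet cgroup driver, so on this cgroup v1 host it would not by itself clear the FailCgroupV1 validation shown in the kubelet journal:

	out/minikube-linux-arm64 start -p newest-cni-457779 \
	  --extra-config=kubelet.cgroup-driver=systemd \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0
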
** stderr ** 
	I1208 01:53:26.756000 1136586 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:53:26.756538 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756548 1136586 out.go:374] Setting ErrFile to fd 2...
	I1208 01:53:26.756553 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756842 1136586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:53:26.757268 1136586 out.go:368] Setting JSON to false
	I1208 01:53:26.758219 1136586 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23760,"bootTime":1765135047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:53:26.758285 1136586 start.go:143] virtualization:  
	I1208 01:53:26.761027 1136586 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:53:26.763300 1136586 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:53:26.763385 1136586 notify.go:221] Checking for updates...
	I1208 01:53:26.769236 1136586 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:53:26.772301 1136586 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:26.775351 1136586 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:53:26.778370 1136586 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:53:26.781331 1136586 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:53:26.784939 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:26.785587 1136586 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:53:26.821497 1136586 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:53:26.821612 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.884858 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.874574541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.884969 1136586 docker.go:319] overlay module found
	I1208 01:53:26.888166 1136586 out.go:179] * Using the docker driver based on existing profile
	I1208 01:53:26.891132 1136586 start.go:309] selected driver: docker
	I1208 01:53:26.891162 1136586 start.go:927] validating driver "docker" against &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.891271 1136586 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:53:26.892009 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.946578 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.937487208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
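The validation above shells out to docker system info with a JSON template and decodes the whole Info struct. As a sketch (not minikube's own code), the handful of fields it keys off can be pulled directly with the same Go-template mechanism:

	# Sketch: the Info fields the driver validation above acts on.
	docker system info --format '{{.ServerVersion}} {{.Driver}} {{.CgroupDriver}}'
	docker system info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes, {{.OSType}}/{{.Architecture}}'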
	I1208 01:53:26.946934 1136586 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:53:26.946970 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:26.947032 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:26.947088 1136586 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.951997 1136586 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:53:26.954840 1136586 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:53:26.957745 1136586 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:53:26.960653 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:26.960709 1136586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:53:26.960722 1136586 cache.go:65] Caching tarball of preloaded images
	I1208 01:53:26.960734 1136586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:53:26.960819 1136586 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:53:26.960831 1136586 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:53:26.961033 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:26.980599 1136586 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:53:26.980630 1136586 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:53:26.980646 1136586 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:53:26.980676 1136586 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:53:26.980741 1136586 start.go:364] duration metric: took 41.994µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:53:26.980766 1136586 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:53:26.980775 1136586 fix.go:54] fixHost starting: 
	I1208 01:53:26.981064 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:26.998167 1136586 fix.go:112] recreateIfNeeded on newest-cni-457779: state=Stopped err=<nil>
	W1208 01:53:26.998205 1136586 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 01:53:27.003360 1136586 out.go:252] * Restarting existing docker container for "newest-cni-457779" ...
	I1208 01:53:27.003497 1136586 cli_runner.go:164] Run: docker start newest-cni-457779
	I1208 01:53:27.261076 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:27.282732 1136586 kic.go:430] container "newest-cni-457779" state is running.
	I1208 01:53:27.283122 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:27.311045 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:27.311287 1136586 machine.go:94] provisionDockerMachine start ...
	I1208 01:53:27.311346 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:27.335078 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:27.335680 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:27.335692 1136586 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:53:27.336739 1136586 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
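The first dial fails with a bare EOF because sshd inside the just-started container is not accepting connections yet; libmachine retries until it answers (three seconds later, below). The port it dials is the host port Docker mapped to the container's 22/tcp, and the connection can be reproduced by hand with the same inspect template (a sketch; the key path is the profile machine key shown in the sshutil lines further down):

	# Sketch: resolve the mapped SSH port and connect the way minikube does.
	port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-457779)
	ssh -i .minikube/machines/newest-cni-457779/id_rsa -p "$port" docker@127.0.0.1 hostname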
	I1208 01:53:30.502303 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.502328 1136586 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:53:30.502403 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.520473 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.520821 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.520832 1136586 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:53:30.680340 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.680522 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.698887 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.699207 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.699230 1136586 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:53:30.850881 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:53:30.850907 1136586 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:53:30.850931 1136586 ubuntu.go:190] setting up certificates
	I1208 01:53:30.850939 1136586 provision.go:84] configureAuth start
	I1208 01:53:30.851000 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:30.868852 1136586 provision.go:143] copyHostCerts
	I1208 01:53:30.868925 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:53:30.868935 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:53:30.869018 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:53:30.869113 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:53:30.869119 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:53:30.869143 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:53:30.869192 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:53:30.869197 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:53:30.869218 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:53:30.869262 1136586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
	I1208 01:53:31.146721 1136586 provision.go:177] copyRemoteCerts
	I1208 01:53:31.146819 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:53:31.146887 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.165202 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.270344 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:53:31.288520 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:53:31.307009 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:53:31.325139 1136586 provision.go:87] duration metric: took 474.176778ms to configureAuth
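configureAuth regenerated the server certificate with the SANs listed at 01:53:30 (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-457779) and copied it to /etc/docker on the node. A hedged spot-check that those SANs actually landed in the cert:

	# Sketch: list the Subject Alternative Names in the regenerated server cert.
	openssl x509 -in .minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'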
	I1208 01:53:31.325166 1136586 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:53:31.325413 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:31.325428 1136586 machine.go:97] duration metric: took 4.014132188s to provisionDockerMachine
	I1208 01:53:31.325438 1136586 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:53:31.325453 1136586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:53:31.325527 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:53:31.325572 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.342958 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.450484 1136586 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:53:31.453930 1136586 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:53:31.453961 1136586 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:53:31.453978 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:53:31.454035 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:53:31.454126 1136586 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:53:31.454236 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:53:31.461814 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:31.480492 1136586 start.go:296] duration metric: took 155.029827ms for postStartSetup
	I1208 01:53:31.480576 1136586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:53:31.480620 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.498567 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.608416 1136586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:53:31.613302 1136586 fix.go:56] duration metric: took 4.632518901s for fixHost
	I1208 01:53:31.613327 1136586 start.go:83] releasing machines lock for "newest-cni-457779", held for 4.632572375s
	I1208 01:53:31.613414 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:31.630699 1136586 ssh_runner.go:195] Run: cat /version.json
	I1208 01:53:31.630750 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.630785 1136586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:53:31.630847 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.650759 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.653824 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.754273 1136586 ssh_runner.go:195] Run: systemctl --version
	I1208 01:53:31.849639 1136586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:53:31.855754 1136586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:53:31.855850 1136586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:53:31.866557 1136586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:53:31.866588 1136586 start.go:496] detecting cgroup driver to use...
	I1208 01:53:31.866621 1136586 detect.go:187] detected "cgroupfs" cgroup driver on host os
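The "cgroupfs" detection above is what later drives the SystemdCgroup rewrite in containerd's config. Assuming a shell on the same host, the two usual probes look like this (a sketch, not minikube's detect.go):

	# 'cgroup2fs' means the unified v2 hierarchy; 'tmpfs' means legacy v1 (cgroupfs).
	stat -fc %T /sys/fs/cgroup
	# What the local Docker daemon itself reports using.
	docker info --format '{{.CgroupDriver}}'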
	I1208 01:53:31.866707 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:53:31.887994 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:53:31.906727 1136586 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:53:31.906830 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:53:31.922954 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:53:31.936664 1136586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:53:32.054316 1136586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:53:32.173483 1136586 docker.go:234] disabling docker service ...
	I1208 01:53:32.173578 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:53:32.189444 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:53:32.206742 1136586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:53:32.325262 1136586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:53:32.443602 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:53:32.456770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:53:32.473213 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:53:32.483724 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:53:32.493138 1136586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:53:32.493251 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:53:32.502652 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.512217 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:53:32.521333 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.530989 1136586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:53:32.539889 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:53:32.549127 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:53:32.558425 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:53:32.567684 1136586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:53:32.575542 1136586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:53:32.583139 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:32.723777 1136586 ssh_runner.go:195] Run: sudo systemctl restart containerd
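After the sed rewrites and the containerd restart above, the effective settings can be spot-checked from the host. This is only a verification sketch against the node container, not part of the test flow:

	# Confirm the config.toml rewrites at 01:53:32 took effect inside the node.
	docker exec newest-cni-457779 grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	docker exec newest-cni-457779 systemctl is-active containerd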
	I1208 01:53:32.846014 1136586 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:53:32.846088 1136586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:53:32.849865 1136586 start.go:564] Will wait 60s for crictl version
	I1208 01:53:32.849924 1136586 ssh_runner.go:195] Run: which crictl
	I1208 01:53:32.853562 1136586 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:53:32.880330 1136586 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:53:32.880452 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.901579 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.928462 1136586 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:53:32.931363 1136586 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:53:32.945897 1136586 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:53:32.950021 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
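The grep/echo/cp pattern above, rather than a plain sed -i, is deliberate: /etc/hosts inside a container is bind-mounted, so tools that replace the file with a new inode fail, while rewriting a temp copy and cp-ing it back preserves the mount. The pattern in isolation:

	# Rewrite a bind-mounted /etc/hosts without replacing its inode.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts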
	I1208 01:53:32.963090 1136586 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1208 01:53:32.966006 1136586 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:53:32.966181 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:32.966277 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.001671 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.001709 1136586 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:53:33.001783 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.037763 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.037789 1136586 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:53:33.037796 1136586 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:53:33.037895 1136586 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:53:33.037971 1136586 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:53:33.063762 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:33.063790 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:33.063814 1136586 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:53:33.063838 1136586 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:53:33.063976 1136586 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:53:33.064046 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:53:33.072124 1136586 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:53:33.072199 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:53:33.079978 1136586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:53:33.094440 1136586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:53:33.114285 1136586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
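With the rendered manifest staged as kubeadm.yaml.new, kubeadm itself can vet it before it is diffed against the live copy at 01:53:34. A hedged sketch: the validate subcommand exists in recent kubeadm releases, so availability depends on the pinned binary.

	# Sketch: let kubeadm check the generated config for unknown fields/values.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new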
	I1208 01:53:33.148370 1136586 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:53:33.154333 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:33.175383 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:33.368419 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:33.425889 1136586 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:53:33.425915 1136586 certs.go:195] generating shared ca certs ...
	I1208 01:53:33.425933 1136586 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:33.426101 1136586 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:53:33.426153 1136586 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:53:33.426161 1136586 certs.go:257] generating profile certs ...
	I1208 01:53:33.426267 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:53:33.426332 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:53:33.426377 1136586 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:53:33.426524 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:53:33.426568 1136586 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:53:33.426582 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:53:33.426612 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:53:33.426642 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:53:33.426669 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:53:33.426734 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:33.427335 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:53:33.467362 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:53:33.494653 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:53:33.520274 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:53:33.539143 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:53:33.558359 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:53:33.583585 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:53:33.606437 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:53:33.629051 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:53:33.649569 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:53:33.670329 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:53:33.709388 1136586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:53:33.723127 1136586 ssh_runner.go:195] Run: openssl version
	I1208 01:53:33.729848 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.737400 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:53:33.744968 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749630 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749695 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.792574 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:53:33.800140 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.812741 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:53:33.821534 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825755 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825831 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.873472 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:53:33.882187 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.890767 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:53:33.901446 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907874 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907943 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.952061 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
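The test -L probes above (51391683.0, 3ec20f2e.0, b5213941.0) check OpenSSL's hash-named symlinks: OpenSSL looks up CAs in /etc/ssl/certs by a subject-name hash, so each installed PEM needs a <hash>.0 link pointing at it. That is exactly what the x509 -hash / ln -fs pairs construct:

	# How the b5213941.0 name above is derived and wired up.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"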
	I1208 01:53:33.960568 1136586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:53:33.965214 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:53:34.008563 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:53:34.055484 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:53:34.112335 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:53:34.165388 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:53:34.216189 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
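Each -checkend 86400 call exits non-zero if that certificate expires within the next 24 hours, which is minikube's cue to regenerate it. The six checks above collapse into a loop like this sketch:

	# Exit status 1 from -checkend marks a cert as due for renewal.
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 || echo "renew ${c}"
	done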
	I1208 01:53:34.263034 1136586 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:34.263135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:53:34.263235 1136586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:53:34.294120 1136586 cri.go:89] found id: ""
	I1208 01:53:34.294243 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:53:34.304846 1136586 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:53:34.304879 1136586 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:53:34.304960 1136586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:53:34.316473 1136586 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:53:34.317189 1136586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.317527 1136586 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-457779" cluster setting kubeconfig missing "newest-cni-457779" context setting]
	I1208 01:53:34.318043 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.319993 1136586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:53:34.332564 1136586 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:53:34.332599 1136586 kubeadm.go:602] duration metric: took 27.712722ms to restartPrimaryControlPlane
	I1208 01:53:34.332638 1136586 kubeadm.go:403] duration metric: took 69.60712ms to StartCluster
	I1208 01:53:34.332662 1136586 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.332751 1136586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.333761 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.334050 1136586 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:53:34.334509 1136586 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:53:34.334590 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:34.334604 1136586 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-457779"
	I1208 01:53:34.334619 1136586 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-457779"
	I1208 01:53:34.334646 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.334654 1136586 addons.go:70] Setting dashboard=true in profile "newest-cni-457779"
	I1208 01:53:34.334664 1136586 addons.go:239] Setting addon dashboard=true in "newest-cni-457779"
	W1208 01:53:34.334680 1136586 addons.go:248] addon dashboard should already be in state true
	I1208 01:53:34.334701 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.335128 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.335222 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.338384 1136586 out.go:179] * Verifying Kubernetes components...
	I1208 01:53:34.338808 1136586 addons.go:70] Setting default-storageclass=true in profile "newest-cni-457779"
	I1208 01:53:34.338830 1136586 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-457779"
	I1208 01:53:34.339192 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.342236 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:34.384696 1136586 addons.go:239] Setting addon default-storageclass=true in "newest-cni-457779"
	I1208 01:53:34.384738 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.385173 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.395531 1136586 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:53:34.398489 1136586 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:53:34.401766 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:53:34.401802 1136586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:53:34.401870 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.413624 1136586 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:53:34.416611 1136586 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.416635 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:53:34.416703 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.446412 1136586 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.446432 1136586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:53:34.446519 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.468661 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.486870 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.495400 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.648143 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:34.791310 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:53:34.791383 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:53:34.801259 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.809204 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.852787 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:53:34.852815 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:53:34.976510 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:53:34.976546 1136586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:53:35.059518 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:53:35.059546 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:53:35.081694 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:53:35.081725 1136586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:53:35.097221 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:53:35.097249 1136586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:53:35.113396 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:53:35.113423 1136586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:53:35.128309 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:53:35.128332 1136586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:53:35.144063 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.144088 1136586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:53:35.163973 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.343568 1136586 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:53:35.343639 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
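	(The interleaved pgrep probes above are minikube waiting for the kube-apiserver process to appear before the addon applies can succeed; every apply below fails with "connection refused" on localhost:8443 precisely because this wait has not yet completed. A minimal sketch of that polling pattern, assuming a hypothetical waitForAPIServer helper rather than minikube's actual api_server.go code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep, as the log does, until a kube-apiserver
	// process matching the minikube profile appears or the timeout expires.
	// pgrep exits non-zero when nothing matches, so err != nil means "not yet".
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				return nil // apiserver process is up; applies can start succeeding
			}
			time.Sleep(500 * time.Millisecond) // poll interval is illustrative
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServer(30 * time.Second); err != nil {
			fmt.Println(err)
		}
	})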
	W1208 01:53:35.343728 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343749 1136586 retry.go:31] will retry after 313.237886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343796 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343802 1136586 retry.go:31] will retry after 267.065812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343986 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343999 1136586 retry.go:31] will retry after 357.870271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.611924 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:35.657423 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:35.685479 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.685507 1136586 retry.go:31] will retry after 235.819569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.702853 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:35.745089 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.745200 1136586 retry.go:31] will retry after 496.615001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.783116 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.783150 1136586 retry.go:31] will retry after 415.603405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.844207 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:35.922577 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:35.992239 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.992284 1136586 retry.go:31] will retry after 419.233092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.199657 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.242360 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.275822 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.275881 1136586 retry.go:31] will retry after 506.304834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:36.313961 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.313996 1136586 retry.go:31] will retry after 341.203132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.344211 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:36.412076 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:36.475666 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.475724 1136586 retry.go:31] will retry after 757.567155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.656038 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.717469 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.717504 1136586 retry.go:31] will retry after 858.45693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.782939 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.844509 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:36.857199 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.857314 1136586 retry.go:31] will retry after 1.254351113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.233554 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:37.293681 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.293719 1136586 retry.go:31] will retry after 1.120312347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.343808 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:37.576883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:37.657137 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.657170 1136586 retry.go:31] will retry after 1.273828893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.844396 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.111904 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:38.175735 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.175771 1136586 retry.go:31] will retry after 1.371961744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.344170 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.414206 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:38.473557 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.473592 1136586 retry.go:31] will retry after 1.305474532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.843968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.931790 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:38.991073 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.991107 1136586 retry.go:31] will retry after 2.323329318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
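	(Each stderr above suggests --validate=false, but that would only skip the client-side openapi download; with nothing listening on localhost:8443, the apply itself would still be refused, so the retry loop, not the flag, is the appropriate response here. For reference, bypassing validation would just be the same command from the log with the flag kubectl itself suggests appended:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false --force -f /etc/kubernetes/addons/storage-provisioner.yaml)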
	I1208 01:53:39.344538 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:39.548354 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:39.614499 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.614532 1136586 retry.go:31] will retry after 2.345376349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:39.779883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:39.839516 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.839550 1136586 retry.go:31] will retry after 1.632764803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:39.843744 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.343857 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.844131 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
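In parallel with the applies, minikube polls for a live apiserver process roughly every 500ms, as the timestamps on the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs show. A sketch of that wait loop (illustrative; the real calls go through ssh_runner over SSH rather than local exec, and the timeout here is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process whose
// command line matches the pattern appears, mirroring the repeated
// pgrep runs in this log. With -f the pattern is matched against the
// full command line, -x requires it to match exactly, and -n picks the
// newest matching process.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return nil
		}
		// pgrep exits non-zero while no process matches; poll again.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}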
	I1208 01:53:41.314885 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:41.344468 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:41.399054 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.399086 1136586 retry.go:31] will retry after 1.628703977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:41.473438 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:41.539567 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.539608 1136586 retry.go:31] will retry after 4.6526683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:41.844314 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.960631 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:42.037435 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:42.037475 1136586 retry.go:31] will retry after 2.24839836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:42.343723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:42.843913 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.028344 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:43.092228 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:43.092267 1136586 retry.go:31] will retry after 6.138872071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:43.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.843812 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:44.286696 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:44.343910 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:44.363154 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:44.363184 1136586 retry.go:31] will retry after 4.885412288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:44.843802 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.344023 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.844504 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.193193 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:46.256318 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:46.256352 1136586 retry.go:31] will retry after 6.576205276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:46.344576 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.844679 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.344358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.843925 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.231766 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:49.249321 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:49.295577 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.295606 1136586 retry.go:31] will retry after 5.897796539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	W1208 01:53:49.321879 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.321913 1136586 retry.go:31] will retry after 5.135606393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:49.343793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.843777 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.844708 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.344109 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.844601 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.344090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.833191 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:52.843854 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:52.942603 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:52.942641 1136586 retry.go:31] will retry after 10.350172314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:53.344347 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:53.843800 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.343948 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.457681 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:54.519827 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:54.519864 1136586 retry.go:31] will retry after 12.267694675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:54.844117 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.193625 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:55.256579 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:55.256612 1136586 retry.go:31] will retry after 11.163170119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout/stderr omitted: identical to the apply failure logged immediately above]
	I1208 01:53:55.343847 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.843783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.343814 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.844654 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.344616 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.844487 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.343880 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.843787 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.343848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.843826 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.343799 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.844518 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.343861 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.844575 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.343756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.844391 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:03.293666 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:54:03.344443 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:03.397612 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:03.397650 1136586 retry.go:31] will retry after 19.276295687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
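The stderr above pinpoints the failure mode for every apply in this section: kubectl validates manifests client-side by downloading the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that download is refused before any manifest is submitted. The error message itself names the escape hatch, --validate=false. As an illustration (not a fix minikube applies here), the same command could be driven from Go with that flag added; with the apiserver still down, the apply would of course still fail at submission.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log, plus --validate=false to skip the
	// client-side OpenAPI download that is failing above.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}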
	I1208 01:54:03.844417 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.343968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.344710 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.843828 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.420172 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:06.484485 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.484519 1136586 retry.go:31] will retry after 9.376809348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:54:06.788188 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:54:06.843694 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:06.852042 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.852079 1136586 retry.go:31] will retry after 14.243902866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:54:07.344022 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:07.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.344592 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.844723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.344453 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.843950 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.344400 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.844496 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.343717 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.844737 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.344750 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.843793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.343904 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.343908 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.844260 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.344591 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.843791 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
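The interleaved ssh_runner lines above are a ~500 ms polling loop waiting for a kube-apiserver process to appear; pgrep exits non-zero when nothing matches, so a zero exit status means the apiserver is up. A minimal sketch of that pattern (waitForAPIServer is a hypothetical name, and the timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching process exists or the
// deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Exit status 0 means pgrep found a matching process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not start within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(5 * time.Second))
}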
	I1208 01:54:15.862033 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:15.923558 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:15.923598 1136586 retry.go:31] will retry after 11.623443237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:54:16.344246 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:16.844386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.344635 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.344732 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.843932 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.344121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.844530 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.344183 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.844204 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.097241 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:21.169765 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.169803 1136586 retry.go:31] will retry after 14.268049825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:54:21.343856 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.844672 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.344587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.674615 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:22.733064 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.733093 1136586 retry.go:31] will retry after 25.324201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:54:22.844513 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.344392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.844423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.343928 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.844484 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.344404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.844721 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.344197 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.844678 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.343798 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.547765 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:27.612562 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.612601 1136586 retry.go:31] will retry after 28.822296594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:54:27.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.344385 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.344796 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.344407 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.844544 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.343765 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.844221 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.343845 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.844333 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.344526 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.844321 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:34.344033 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:34.344149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:34.370172 1136586 cri.go:89] found id: ""
	I1208 01:54:34.370196 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.370205 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:34.370211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:34.370269 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:34.395619 1136586 cri.go:89] found id: ""
	I1208 01:54:34.395642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.395650 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:34.395656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:34.395720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:34.422963 1136586 cri.go:89] found id: ""
	I1208 01:54:34.422993 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.423003 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:34.423009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:34.423074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:34.451846 1136586 cri.go:89] found id: ""
	I1208 01:54:34.451871 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.451879 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:34.451886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:34.451951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:34.480597 1136586 cri.go:89] found id: ""
	I1208 01:54:34.480622 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.480631 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:34.480638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:34.480728 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:34.505381 1136586 cri.go:89] found id: ""
	I1208 01:54:34.505412 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.505421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:34.505427 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:34.505486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:34.531276 1136586 cri.go:89] found id: ""
	I1208 01:54:34.531304 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.531313 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:34.531320 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:34.531384 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:34.556518 1136586 cri.go:89] found id: ""
	I1208 01:54:34.556542 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.556550 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
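The cri.go sweep above queries the container runtime once per component: crictl ps -a --quiet --name=X prints bare container IDs, one per line, so empty output is exactly the `found id: ""` / "0 containers" result logged for every component. A sketch of that query (listContainers is a hypothetical helper, not minikube's cri.go API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (any state) whose name
// matches the given filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// Fields drops the trailing newline and any blank lines, so an empty
	// result comes back as a zero-length slice.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Printf("found %d container(s): %v (err: %v)\n", len(ids), ids, err)
}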
	I1208 01:54:34.556566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:34.556578 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:34.613370 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:34.613408 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:34.628308 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:34.628338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:34.694181 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:34.694202 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:34.694216 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:34.720374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:34.720425 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
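With no containers to inspect, the loop falls back to host-level diagnostics, shelling out through bash for the kubelet and containerd journals, dmesg, and a container-status listing. A sketch of the journal step (unitLogs is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs returns the last n journal lines for a systemd unit, mirroring
// the `sudo journalctl -u <unit> -n 400` commands in the log.
func unitLogs(unit string, n int) (string, error) {
	cmd := fmt.Sprintf("sudo journalctl -u %s -n %d", unit, n)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := unitLogs("kubelet", 400)
	fmt.Printf("captured %d bytes (err: %v)\n", len(logs), err)
}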
	I1208 01:54:35.438126 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:35.498508 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:35.498543 1136586 retry.go:31] will retry after 43.888808015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:54:37.252653 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:37.264309 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:37.264385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:37.296827 1136586 cri.go:89] found id: ""
	I1208 01:54:37.296856 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.296865 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:37.296872 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:37.296938 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:37.322795 1136586 cri.go:89] found id: ""
	I1208 01:54:37.322818 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.322826 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:37.322832 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:37.322890 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:37.347015 1136586 cri.go:89] found id: ""
	I1208 01:54:37.347039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.347048 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:37.347054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:37.347112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:37.376654 1136586 cri.go:89] found id: ""
	I1208 01:54:37.376685 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.376694 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:37.376702 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:37.376768 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:37.402392 1136586 cri.go:89] found id: ""
	I1208 01:54:37.402419 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.402428 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:37.402434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:37.402531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:37.427265 1136586 cri.go:89] found id: ""
	I1208 01:54:37.427292 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.427302 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:37.427308 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:37.427375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:37.452009 1136586 cri.go:89] found id: ""
	I1208 01:54:37.452036 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.452046 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:37.452052 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:37.452113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:37.478250 1136586 cri.go:89] found id: ""
	I1208 01:54:37.478274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.478282 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:37.478292 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:37.478303 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:37.492990 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:37.493059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:37.560010 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:37.560033 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:37.560046 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:37.586791 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:37.586827 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:37.617527 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:37.617603 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.174865 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:40.187458 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:40.187538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:40.216164 1136586 cri.go:89] found id: ""
	I1208 01:54:40.216195 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.216204 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:40.216211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:40.216280 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:40.243524 1136586 cri.go:89] found id: ""
	I1208 01:54:40.243552 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.243561 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:40.243567 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:40.243632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:40.273554 1136586 cri.go:89] found id: ""
	I1208 01:54:40.273582 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.273592 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:40.273598 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:40.273660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:40.301228 1136586 cri.go:89] found id: ""
	I1208 01:54:40.301249 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.301257 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:40.301263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:40.301321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:40.330159 1136586 cri.go:89] found id: ""
	I1208 01:54:40.330179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.330187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:40.330193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:40.330252 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:40.355514 1136586 cri.go:89] found id: ""
	I1208 01:54:40.355583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.355604 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:40.355611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:40.355685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:40.381442 1136586 cri.go:89] found id: ""
	I1208 01:54:40.381468 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.381477 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:40.381483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:40.381539 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:40.406014 1136586 cri.go:89] found id: ""
	I1208 01:54:40.406039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.406048 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:40.406057 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:40.406069 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:40.465966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:40.465986 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:40.466000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:40.490766 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:40.490799 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:40.518111 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:40.518140 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.573667 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:40.573702 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
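The cycle above repeats for the remainder of the wait: minikube polls for a kube-apiserver process, asks the CRI for each expected control-plane container by name, finds none, and then gathers kubelet, dmesg, containerd, and container-status logs. A minimal Go sketch of that polling pattern (not minikube's actual cri.go/logs.go code; the command and the container names are taken from the log, everything else is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // it returns the container IDs crictl reports for the given name filter.
    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// The same control-plane names the log polls for, in the same order.
    	names := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range names {
    		ids, err := listCRIContainers(name)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

Every iteration in this run returns `found id: ""`, which is why the gatherer keeps falling through to the journal and dmesg collectors.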
	I1208 01:54:43.088883 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:43.112185 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:43.112253 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:43.175929 1136586 cri.go:89] found id: ""
	I1208 01:54:43.175952 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.175960 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:43.175966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:43.176037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:43.208920 1136586 cri.go:89] found id: ""
	I1208 01:54:43.208946 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.208955 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:43.208961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:43.209024 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:43.235210 1136586 cri.go:89] found id: ""
	I1208 01:54:43.235235 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.235245 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:43.235252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:43.235319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:43.263618 1136586 cri.go:89] found id: ""
	I1208 01:54:43.263642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.263658 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:43.263666 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:43.263727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:43.290748 1136586 cri.go:89] found id: ""
	I1208 01:54:43.290783 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.290792 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:43.290798 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:43.290857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:43.314874 1136586 cri.go:89] found id: ""
	I1208 01:54:43.314898 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.314906 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:43.314913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:43.314975 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:43.339655 1136586 cri.go:89] found id: ""
	I1208 01:54:43.339680 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.339707 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:43.339713 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:43.339777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:43.364203 1136586 cri.go:89] found id: ""
	I1208 01:54:43.364230 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.364240 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:43.364250 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:43.364261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:43.390041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:43.390079 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:43.420626 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:43.420661 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:43.475834 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:43.475876 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.491658 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:43.491696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:43.559609 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:46.059911 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:46.070737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:46.070825 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:46.110556 1136586 cri.go:89] found id: ""
	I1208 01:54:46.110583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.110593 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:46.110600 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:46.110665 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:46.186917 1136586 cri.go:89] found id: ""
	I1208 01:54:46.186942 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.186951 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:46.186957 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:46.187021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:46.212604 1136586 cri.go:89] found id: ""
	I1208 01:54:46.212631 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.212639 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:46.212646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:46.212724 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:46.239989 1136586 cri.go:89] found id: ""
	I1208 01:54:46.240043 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.240054 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:46.240060 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:46.240217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:46.266799 1136586 cri.go:89] found id: ""
	I1208 01:54:46.266829 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.266839 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:46.266845 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:46.266918 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:46.294724 1136586 cri.go:89] found id: ""
	I1208 01:54:46.294753 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.294762 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:46.294769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:46.294829 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:46.320725 1136586 cri.go:89] found id: ""
	I1208 01:54:46.320754 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.320764 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:46.320771 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:46.320854 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:46.350768 1136586 cri.go:89] found id: ""
	I1208 01:54:46.350792 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.350801 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:46.350810 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:46.350822 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:46.416454 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:46.407778    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.408509    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410162    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410818    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.412543    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:46.416490 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:46.416510 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:46.442082 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:46.442115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:46.474546 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:46.474573 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:46.532104 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:46.532141 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:48.057590 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:48.120301 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:48.120337 1136586 retry.go:31] will retry after 17.544839516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
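The storageclass addon apply fails because the apiserver on localhost:8443 is unreachable, and retry.go schedules another attempt after a delay (17.5s here). A minimal sketch of that retry shape, assuming a doubling delay with random jitter; the exact backoff policy is an assumption, not taken from minikube:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds or the attempt
    // budget is spent, sleeping a growing, jittered delay between attempts.
    func applyWithRetry(manifest string, attempts int) error {
    	delay := 5 * time.Second
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
    			return nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }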
	I1208 01:54:49.047527 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:49.058154 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.058224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.087906 1136586 cri.go:89] found id: ""
	I1208 01:54:49.087974 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.087999 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.088010 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.088086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.147486 1136586 cri.go:89] found id: ""
	I1208 01:54:49.147562 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.147585 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.147603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.147699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.190637 1136586 cri.go:89] found id: ""
	I1208 01:54:49.190712 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.190735 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.190755 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.190842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.222497 1136586 cri.go:89] found id: ""
	I1208 01:54:49.222525 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.222534 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.222549 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.222624 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.247026 1136586 cri.go:89] found id: ""
	I1208 01:54:49.247052 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.247061 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.247067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.247125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:49.275349 1136586 cri.go:89] found id: ""
	I1208 01:54:49.275378 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.275387 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:49.275394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:49.275499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:49.300792 1136586 cri.go:89] found id: ""
	I1208 01:54:49.300820 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.300829 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:49.300835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:49.300892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:49.325853 1136586 cri.go:89] found id: ""
	I1208 01:54:49.325882 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.325890 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:49.325900 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:49.325912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:49.384418 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:49.384468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:49.399275 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:49.399307 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:49.466718 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:49.458157    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.458602    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460310    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460773    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.462192    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:49.466785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:49.466814 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:49.491769 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:49.491803 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:52.023420 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:52.034753 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:52.034828 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:52.064923 1136586 cri.go:89] found id: ""
	I1208 01:54:52.064945 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.064953 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:52.064960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:52.065022 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:52.104945 1136586 cri.go:89] found id: ""
	I1208 01:54:52.104968 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.104977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:52.104983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:52.105043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:52.171374 1136586 cri.go:89] found id: ""
	I1208 01:54:52.171395 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.171404 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:52.171410 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:52.171468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:52.201431 1136586 cri.go:89] found id: ""
	I1208 01:54:52.201476 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.201485 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:52.201492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:52.201563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:52.226892 1136586 cri.go:89] found id: ""
	I1208 01:54:52.226920 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.226929 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:52.226935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:52.227001 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:52.252811 1136586 cri.go:89] found id: ""
	I1208 01:54:52.252891 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.252914 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:52.252935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:52.253034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:52.282156 1136586 cri.go:89] found id: ""
	I1208 01:54:52.282179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.282188 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:52.282195 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:52.282259 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:52.308580 1136586 cri.go:89] found id: ""
	I1208 01:54:52.308607 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.308618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:52.308628 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:52.308639 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:52.364992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:52.365028 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:52.379850 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:52.379877 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:52.445238 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:52.436912    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.437761    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439367    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439683    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.441222    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:52.445260 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:52.445273 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:52.471470 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:52.471505 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:55.003548 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:55.026046 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:55.026131 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:55.053887 1136586 cri.go:89] found id: ""
	I1208 01:54:55.053964 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.053989 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:55.054009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:55.054101 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:55.088698 1136586 cri.go:89] found id: ""
	I1208 01:54:55.088724 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.088733 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:55.088760 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:55.088849 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:55.170740 1136586 cri.go:89] found id: ""
	I1208 01:54:55.170776 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.170785 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:55.170791 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:55.170899 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:55.197620 1136586 cri.go:89] found id: ""
	I1208 01:54:55.197656 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.197666 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:55.197690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:55.197776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:55.223553 1136586 cri.go:89] found id: ""
	I1208 01:54:55.223580 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.223589 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:55.223595 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:55.223680 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:55.248608 1136586 cri.go:89] found id: ""
	I1208 01:54:55.248677 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.248692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:55.248699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:55.248765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:55.274165 1136586 cri.go:89] found id: ""
	I1208 01:54:55.274232 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.274254 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:55.274272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:55.274361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:55.300558 1136586 cri.go:89] found id: ""
	I1208 01:54:55.300590 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.300600 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:55.300611 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:55.300622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:55.360386 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:55.360422 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:55.375869 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:55.375899 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:55.447970 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:55.439084    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.439796    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.441452    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.442051    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.443786    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:55.447993 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:55.448005 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:55.473774 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:55.473808 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
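The "container status" step uses a shell fallback: run crictl if `which crictl` finds it on the PATH, otherwise fall back to docker. A minimal Go sketch wrapping the exact one-liner from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus runs the fallback one-liner seen throughout the log:
    // prefer crictl when available, and let the `|| sudo docker ps -a`
    // branch cover hosts where only docker is installed.
    func containerStatus() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    		return
    	}
    	fmt.Print(out)
    }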
	I1208 01:54:56.435194 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:56.498425 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:54:56.498545 1136586 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
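The storage-provisioner warning comes from the addon machinery: enabling an addon runs a list of callbacks, each applying a manifest, and a callback failure is surfaced as a warning (out.go) while the start continues. A hypothetical sketch of that shape; the names and structure are illustrative, not minikube's addons.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // A callback applies one piece of an addon; enabling an addon runs them all.
    type callback func() error

    func applyManifest(path string) callback {
    	return func() error {
    		return exec.Command("sudo", "kubectl", "apply", "--force", "-f", path).Run()
    	}
    }

    func enableAddon(name string, callbacks []callback) {
    	for _, cb := range callbacks {
    		if err := cb(); err != nil {
    			// As in the log: the failure becomes a warning, not a fatal error.
    			fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
    			return
    		}
    	}
    	fmt.Printf("addon %q enabled\n", name)
    }

    func main() {
    	enableAddon("storage-provisioner", []callback{
    		applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"),
    	})
    }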
	I1208 01:54:58.006121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:58.018380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:58.018521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:58.045144 1136586 cri.go:89] found id: ""
	I1208 01:54:58.045180 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.045189 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:58.045211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:58.045296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:58.071125 1136586 cri.go:89] found id: ""
	I1208 01:54:58.071151 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.071160 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:58.071167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:58.071226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:58.121465 1136586 cri.go:89] found id: ""
	I1208 01:54:58.121492 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.121511 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:58.121519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:58.121589 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:58.182249 1136586 cri.go:89] found id: ""
	I1208 01:54:58.182274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.182282 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:58.182288 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:58.182350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:58.211355 1136586 cri.go:89] found id: ""
	I1208 01:54:58.211380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.211389 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:58.211395 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:58.211458 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:58.239234 1136586 cri.go:89] found id: ""
	I1208 01:54:58.239262 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.239271 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:58.239278 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:58.239338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:58.268137 1136586 cri.go:89] found id: ""
	I1208 01:54:58.268212 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.268227 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:58.268235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:58.268311 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:58.298356 1136586 cri.go:89] found id: ""
	I1208 01:54:58.298380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.298389 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:58.298399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:58.298483 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:58.356947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:58.356983 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:58.371448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:58.371475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:58.435566 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:58.427538    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.428174    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.429739    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.430336    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.431872    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:58.435589 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:58.435602 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:58.460122 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:58.460156 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:00.988330 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:00.999374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:00.999446 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:01.036571 1136586 cri.go:89] found id: ""
	I1208 01:55:01.036650 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.036687 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:01.036714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:01.036792 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:01.062231 1136586 cri.go:89] found id: ""
	I1208 01:55:01.062257 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.062267 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:01.062274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:01.062333 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:01.087570 1136586 cri.go:89] found id: ""
	I1208 01:55:01.087592 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.087601 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:01.087608 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:01.087668 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:01.137796 1136586 cri.go:89] found id: ""
	I1208 01:55:01.137822 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.137831 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:01.137838 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:01.137905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:01.193217 1136586 cri.go:89] found id: ""
	I1208 01:55:01.193240 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.193249 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:01.193256 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:01.193322 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:01.225114 1136586 cri.go:89] found id: ""
	I1208 01:55:01.225191 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.225217 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:01.225236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:01.225335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:01.253406 1136586 cri.go:89] found id: ""
	I1208 01:55:01.253485 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.253510 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:01.253529 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:01.253641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:01.279950 1136586 cri.go:89] found id: ""
	I1208 01:55:01.280032 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.280058 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:01.280077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:01.280102 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:01.314699 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:01.314731 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:01.371902 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:01.371941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:01.387482 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:01.387511 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:01.454737 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:01.454761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:01.454775 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
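Every failed describe-nodes attempt above reduces to the same symptom: nothing is accepting TCP connections on localhost:8443, the apiserver port kubectl is dialing. A minimal sketch in Go, using only the standard library, of the same reachability check; the probe helper and its timeout are illustrative, not minikube code:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeAPIServer dials the apiserver address the kubectl calls above
    // are using (localhost:8443) and reports whether anything accepts the
    // TCP connection. "connect: connection refused" here is the same
    // error string the log's kubectl invocations keep hitting.
    func probeAPIServer(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return err // e.g. connection refused while the apiserver is down
        }
        return conn.Close()
    }

    func main() {
        if err := probeAPIServer("localhost:8443"); err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        fmt.Println("apiserver port is accepting connections")
    }

Run against the node in this state, the probe would fail on every cycle, which is consistent with the describe-nodes output repeating unchanged below.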
	I1208 01:55:03.982003 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:03.993616 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:03.993689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:04.022115 1136586 cri.go:89] found id: ""
	I1208 01:55:04.022143 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.022152 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:04.022162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:04.022228 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:04.052694 1136586 cri.go:89] found id: ""
	I1208 01:55:04.052720 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.052730 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:04.052737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:04.052799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:04.077702 1136586 cri.go:89] found id: ""
	I1208 01:55:04.077728 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.077737 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:04.077750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:04.077812 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:04.141633 1136586 cri.go:89] found id: ""
	I1208 01:55:04.141668 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.141677 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:04.141683 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:04.141753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:04.188894 1136586 cri.go:89] found id: ""
	I1208 01:55:04.188962 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.188976 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:04.188983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:04.189051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:04.218926 1136586 cri.go:89] found id: ""
	I1208 01:55:04.218951 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.218960 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:04.218966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:04.219028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:04.244759 1136586 cri.go:89] found id: ""
	I1208 01:55:04.244786 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.244795 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:04.244802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:04.244885 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:04.270311 1136586 cri.go:89] found id: ""
	I1208 01:55:04.270337 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.270346 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:04.270377 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:04.270396 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:04.298563 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:04.298594 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:04.357076 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:04.357110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:04.372213 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:04.372255 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:04.437142 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:04.437163 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:04.437176 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:05.665650 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:55:05.727737 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:05.727864 1136586 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
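The "apply failed, will retry" warning (addons.go:477) indicates the addon apply is wrapped in a retry loop. A rough sketch of that pattern in Go, reusing the kubectl path, KUBECONFIG, and manifest from the log; the helper name and the 3-attempt, 5-second policy are assumptions for illustration, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // retryApply shells out to the same kubectl apply invocation the log
    // shows, retrying while the apiserver stays unreachable. Retry count
    // and delay are illustrative.
    func retryApply(manifest string) error {
        kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                kubectl, "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
            time.Sleep(5 * time.Second)
        }
        return lastErr
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
            fmt.Println("giving up:", err)
        }
    }

Note that the underlying failure is not the manifest itself: validation needs the OpenAPI schema from https://localhost:8443, so no retry can succeed until the apiserver comes up.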
	I1208 01:55:06.963817 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:06.974536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:06.974639 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:06.999437 1136586 cri.go:89] found id: ""
	I1208 01:55:06.999466 1136586 logs.go:282] 0 containers: []
	W1208 01:55:06.999475 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:06.999481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:06.999540 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:07.029225 1136586 cri.go:89] found id: ""
	I1208 01:55:07.029253 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.029262 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:07.029274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:07.029343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:07.058657 1136586 cri.go:89] found id: ""
	I1208 01:55:07.058683 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.058692 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:07.058698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:07.058757 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:07.090130 1136586 cri.go:89] found id: ""
	I1208 01:55:07.090158 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.090168 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:07.090175 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:07.090236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:07.139122 1136586 cri.go:89] found id: ""
	I1208 01:55:07.139177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.139187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:07.139194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:07.139261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:07.172306 1136586 cri.go:89] found id: ""
	I1208 01:55:07.172328 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.172336 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:07.172343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:07.172400 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:07.204660 1136586 cri.go:89] found id: ""
	I1208 01:55:07.204689 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.204698 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:07.204705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:07.204764 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:07.230319 1136586 cri.go:89] found id: ""
	I1208 01:55:07.230349 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.230358 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:07.230368 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:07.230380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:07.285979 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:07.286015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:07.301365 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:07.301391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:07.369069 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:07.369140 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:07.369161 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:07.394018 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:07.394051 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:09.924985 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:09.935805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:09.935908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:09.962622 1136586 cri.go:89] found id: ""
	I1208 01:55:09.962647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.962656 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:09.962662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:09.962729 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:09.988243 1136586 cri.go:89] found id: ""
	I1208 01:55:09.988266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.988275 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:09.988283 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:09.988347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:10.019449 1136586 cri.go:89] found id: ""
	I1208 01:55:10.019482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.019492 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:10.019499 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:10.019570 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:10.048613 1136586 cri.go:89] found id: ""
	I1208 01:55:10.048637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.048646 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:10.048652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:10.048726 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:10.080915 1136586 cri.go:89] found id: ""
	I1208 01:55:10.080940 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.080949 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:10.080956 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:10.081021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:10.144352 1136586 cri.go:89] found id: ""
	I1208 01:55:10.144375 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.144384 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:10.144396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:10.144479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:10.182563 1136586 cri.go:89] found id: ""
	I1208 01:55:10.182586 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.182595 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:10.182601 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:10.182662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:10.213649 1136586 cri.go:89] found id: ""
	I1208 01:55:10.213682 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.213694 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:10.213706 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:10.213724 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:10.242084 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:10.242114 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:10.298146 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:10.298181 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:10.313543 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:10.313574 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:10.380205 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:10.380228 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:10.380248 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
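Each polling cycle above issues the same `crictl ps -a --quiet --name=<component>` query once per control-plane component; with --quiet, crictl prints only container IDs, so empty output is what cri.go reports as `found id: ""`. A compact Go sketch of that loop, assuming sudo and crictl are available on the node; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the log's `sudo crictl ps -a --quiet --name=<name>`:
    // --quiet prints one container ID per line, so no output means no match.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

On this node every component would report 0 containers, matching the "No container was found matching ..." warnings that repeat through the cycles below.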
	I1208 01:55:12.905658 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:12.916576 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:12.916648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:12.944122 1136586 cri.go:89] found id: ""
	I1208 01:55:12.944146 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.944155 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:12.944161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:12.944222 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:12.969438 1136586 cri.go:89] found id: ""
	I1208 01:55:12.969464 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.969473 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:12.969481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:12.969542 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:12.997359 1136586 cri.go:89] found id: ""
	I1208 01:55:12.997388 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.997397 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:12.997403 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:12.997470 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:13.025718 1136586 cri.go:89] found id: ""
	I1208 01:55:13.025746 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.025756 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:13.025763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:13.025823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:13.056865 1136586 cri.go:89] found id: ""
	I1208 01:55:13.056892 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.056902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:13.056908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:13.056969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:13.082432 1136586 cri.go:89] found id: ""
	I1208 01:55:13.082528 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.082546 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:13.082554 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:13.082626 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:13.125069 1136586 cri.go:89] found id: ""
	I1208 01:55:13.125144 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.125168 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:13.125187 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:13.125272 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:13.178384 1136586 cri.go:89] found id: ""
	I1208 01:55:13.178482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.178507 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:13.178529 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:13.178567 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:13.239609 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:13.239644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:13.256212 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:13.256240 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:13.323842 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:13.323920 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:13.323949 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:13.348533 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:13.348570 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:15.879223 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:15.890243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:15.890364 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:15.914857 1136586 cri.go:89] found id: ""
	I1208 01:55:15.914886 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.914894 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:15.914901 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:15.914960 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:15.939097 1136586 cri.go:89] found id: ""
	I1208 01:55:15.939123 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.939134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:15.939140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:15.939201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:15.964064 1136586 cri.go:89] found id: ""
	I1208 01:55:15.964088 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.964097 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:15.964103 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:15.964167 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:15.989749 1136586 cri.go:89] found id: ""
	I1208 01:55:15.989789 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.989798 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:15.989805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:15.989864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:16.017523 1136586 cri.go:89] found id: ""
	I1208 01:55:16.017558 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.017567 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:16.017573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:16.017638 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:16.043968 1136586 cri.go:89] found id: ""
	I1208 01:55:16.043996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.044005 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:16.044012 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:16.044077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:16.068942 1136586 cri.go:89] found id: ""
	I1208 01:55:16.069012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.069038 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:16.069057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:16.069149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:16.110088 1136586 cri.go:89] found id: ""
	I1208 01:55:16.110117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.110127 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:16.110136 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:16.110147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:16.194161 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:16.194206 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:16.209083 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:16.209108 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:16.278327 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:16.278346 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:16.278361 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:16.304026 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:16.304059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:18.833542 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:18.844944 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:18.845029 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:18.871187 1136586 cri.go:89] found id: ""
	I1208 01:55:18.871210 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.871220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:18.871226 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:18.871287 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:18.899377 1136586 cri.go:89] found id: ""
	I1208 01:55:18.899399 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.899407 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:18.899413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:18.899473 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:18.924554 1136586 cri.go:89] found id: ""
	I1208 01:55:18.924578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.924587 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:18.924593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:18.924653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:18.949910 1136586 cri.go:89] found id: ""
	I1208 01:55:18.949932 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.949941 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:18.949947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:18.950008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:18.974978 1136586 cri.go:89] found id: ""
	I1208 01:55:18.975001 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.975009 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:18.975015 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:18.975074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:19.005380 1136586 cri.go:89] found id: ""
	I1208 01:55:19.005411 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.005421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:19.005429 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:19.005503 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:19.032668 1136586 cri.go:89] found id: ""
	I1208 01:55:19.032750 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.032765 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:19.032780 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:19.032843 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:19.059531 1136586 cri.go:89] found id: ""
	I1208 01:55:19.059562 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.059572 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:19.059602 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:19.059619 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:19.121579 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:19.121613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:19.138076 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:19.138103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:19.222963 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:19.222987 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:19.223000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:19.253325 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:19.253368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:19.388285 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:55:19.459805 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:19.459968 1136586 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
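	Note on the suggested flag: --validate=false only disables client-side OpenAPI schema validation, and the validation download itself failed with "connection refused", so the apply would fail regardless while the API server is down. A minimal sketch of a manual retry, run inside the node (paths and flags copied from the error above; the curl probe is an assumption, not something minikube runs here, and only two of the ten manifests are shown for brevity):

	    # probe the apiserver first; "connection refused" means nothing listens yet
	    curl -sk https://localhost:8443/healthz || echo "apiserver still down"

	    # once it answers, re-apply the dashboard manifests; --validate=false
	    # (the flag the error suggests) merely skips client-side schema checks
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml \
	      -f /etc/kubernetes/addons/dashboard-svc.yaml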
	I1208 01:55:19.463177 1136586 out.go:179] * Enabled addons: 
	I1208 01:55:19.465938 1136586 addons.go:530] duration metric: took 1m45.131432136s for enable addons: enabled=[]
	I1208 01:55:21.781716 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:21.792431 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:21.792512 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:21.819119 1136586 cri.go:89] found id: ""
	I1208 01:55:21.819147 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.819157 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:21.819164 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:21.819230 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:21.848715 1136586 cri.go:89] found id: ""
	I1208 01:55:21.848751 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.848760 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:21.848767 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:21.848826 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:21.873926 1136586 cri.go:89] found id: ""
	I1208 01:55:21.873952 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.873961 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:21.873968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:21.874028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:21.900968 1136586 cri.go:89] found id: ""
	I1208 01:55:21.900995 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.901005 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:21.901011 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:21.901071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:21.929497 1136586 cri.go:89] found id: ""
	I1208 01:55:21.929524 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.929533 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:21.929540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:21.929600 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:21.954914 1136586 cri.go:89] found id: ""
	I1208 01:55:21.954936 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.954951 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:21.954959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:21.955020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:21.985551 1136586 cri.go:89] found id: ""
	I1208 01:55:21.985578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.985586 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:21.985593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:21.985656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:22.016148 1136586 cri.go:89] found id: ""
	I1208 01:55:22.016222 1136586 logs.go:282] 0 containers: []
	W1208 01:55:22.016244 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
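	The block above is minikube's per-component sweep: a pgrep for a host apiserver process, then one crictl query per control-plane component; every query returning an empty ID list is what produces the paired `0 containers` / `No container was found` lines. A hedged sketch of the same sweep as a loop (the pgrep and crictl invocations are copied from the log; the loop itself is illustrative):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<none>}"   # <none> corresponds to the log's: found id: ""
	    done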
	I1208 01:55:22.016266 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:22.016305 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:22.049513 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:22.049585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:22.109605 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:22.109713 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:22.126061 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:22.126134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:22.225148 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
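	Here `dial tcp [::1]:8443: connect: connection refused` means no process is listening on the apiserver port at all, as opposed to a TLS or auth failure. A quick confirmation, assuming a shell inside the node and that ss is available in the node image (the ss probe is an assumption; the kubectl invocation mirrors the one in the log):

	    sudo ss -ltnp | grep -w 8443 || echo "no listener on 8443"
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get nodes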
	I1208 01:55:22.225170 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:22.225183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
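	Each retry gathers the same four bodies of host-side evidence. The exact commands, collected from the Run: lines above so they can be replayed manually over `minikube ssh` (a sketch only; the gathering order varies between iterations in the log):

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a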
	I1208 01:55:24.750628 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:24.761806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:24.761883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:24.787831 1136586 cri.go:89] found id: ""
	I1208 01:55:24.787855 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.787864 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:24.787871 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:24.787931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:24.816489 1136586 cri.go:89] found id: ""
	I1208 01:55:24.816516 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.816526 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:24.816533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:24.816631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:24.843224 1136586 cri.go:89] found id: ""
	I1208 01:55:24.843247 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.843256 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:24.843262 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:24.843324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:24.869163 1136586 cri.go:89] found id: ""
	I1208 01:55:24.869186 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.869195 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:24.869202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:24.869261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:24.896657 1136586 cri.go:89] found id: ""
	I1208 01:55:24.896685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.896695 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:24.896701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:24.896763 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:24.924888 1136586 cri.go:89] found id: ""
	I1208 01:55:24.924918 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.924927 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:24.924934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:24.924999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:24.951093 1136586 cri.go:89] found id: ""
	I1208 01:55:24.951117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.951126 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:24.951133 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:24.951196 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:24.980609 1136586 cri.go:89] found id: ""
	I1208 01:55:24.980633 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.980642 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:24.980651 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:24.980662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:25.036369 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:25.036404 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:25.057565 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:25.057647 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:25.200105 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:25.189333    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.190129    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192138    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192915    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.194912    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:25.200136 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:25.200151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:25.227358 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:25.227398 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:27.756955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:27.767899 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:27.767972 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:27.795426 1136586 cri.go:89] found id: ""
	I1208 01:55:27.795451 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.795460 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:27.795466 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:27.795529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:27.821100 1136586 cri.go:89] found id: ""
	I1208 01:55:27.821127 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.821137 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:27.821143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:27.821213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:27.851486 1136586 cri.go:89] found id: ""
	I1208 01:55:27.851509 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.851518 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:27.851524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:27.851583 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:27.881644 1136586 cri.go:89] found id: ""
	I1208 01:55:27.881665 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.881673 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:27.881681 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:27.881739 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:27.911149 1136586 cri.go:89] found id: ""
	I1208 01:55:27.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.911185 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:27.911191 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:27.911296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:27.935972 1136586 cri.go:89] found id: ""
	I1208 01:55:27.936042 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.936069 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:27.936084 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:27.936158 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:27.961735 1136586 cri.go:89] found id: ""
	I1208 01:55:27.961762 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.961772 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:27.961778 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:27.961845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:27.987428 1136586 cri.go:89] found id: ""
	I1208 01:55:27.987452 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.987461 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:27.987471 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:27.987482 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:28.018603 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:28.018646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:28.051322 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:28.051395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:28.116115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:28.116154 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:28.140270 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:28.140297 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:28.224200 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:28.213883    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.214376    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218218    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218825    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.220332    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
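	The timestamps (01:55:21, :24, :27, ...) show the whole sweep repeating on a roughly three-second cadence until the start timeout expires. An equivalent manual wait, hedged as a sketch (the healthz probe is an assumption; minikube's own readiness check is the pgrep/crictl sweep shown above):

	    until curl -skf --max-time 2 https://localhost:8443/healthz >/dev/null; do
	      echo "apiserver not ready, retrying in 3s"; sleep 3
	    done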
	I1208 01:55:30.725898 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:30.736353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:30.736438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:30.764621 1136586 cri.go:89] found id: ""
	I1208 01:55:30.764647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.764667 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:30.764691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:30.764772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:30.790477 1136586 cri.go:89] found id: ""
	I1208 01:55:30.790502 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.790510 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:30.790516 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:30.790577 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:30.816436 1136586 cri.go:89] found id: ""
	I1208 01:55:30.816522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.816539 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:30.816547 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:30.816625 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:30.845918 1136586 cri.go:89] found id: ""
	I1208 01:55:30.845944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.845953 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:30.845960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:30.846020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:30.870263 1136586 cri.go:89] found id: ""
	I1208 01:55:30.870307 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.870317 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:30.870323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:30.870388 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:30.896013 1136586 cri.go:89] found id: ""
	I1208 01:55:30.896041 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.896049 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:30.896057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:30.896174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:30.921585 1136586 cri.go:89] found id: ""
	I1208 01:55:30.921612 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.921621 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:30.921628 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:30.921689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:30.951330 1136586 cri.go:89] found id: ""
	I1208 01:55:30.951355 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.951365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:30.951374 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:30.951391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:30.977110 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:30.977151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:31.009469 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:31.009525 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:31.071586 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:31.071635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:31.087881 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:31.087927 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:31.188603 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:31.173005    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175001    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175960    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.177836    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.178524    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:33.688896 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:33.699658 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:33.699730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:33.723918 1136586 cri.go:89] found id: ""
	I1208 01:55:33.723944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.723952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:33.723959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:33.724017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:33.748249 1136586 cri.go:89] found id: ""
	I1208 01:55:33.748272 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.748281 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:33.748287 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:33.748361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:33.774082 1136586 cri.go:89] found id: ""
	I1208 01:55:33.774165 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.774188 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:33.774208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:33.774300 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:33.804783 1136586 cri.go:89] found id: ""
	I1208 01:55:33.804808 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.804817 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:33.804824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:33.804883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:33.830537 1136586 cri.go:89] found id: ""
	I1208 01:55:33.830568 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.830578 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:33.830584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:33.830645 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:33.855676 1136586 cri.go:89] found id: ""
	I1208 01:55:33.855702 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.855711 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:33.855719 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:33.855788 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:33.881829 1136586 cri.go:89] found id: ""
	I1208 01:55:33.881907 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.881943 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:33.881968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:33.882061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:33.911849 1136586 cri.go:89] found id: ""
	I1208 01:55:33.911872 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.911880 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:33.911925 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:33.911937 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:33.939161 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:33.939188 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:33.997922 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:33.997962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:34.019097 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:34.019129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:34.086047 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:34.076333    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.077036    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.078821    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.079347    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.081184    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:34.086070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:34.086081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:36.616392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:36.627074 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:36.627155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:36.655354 1136586 cri.go:89] found id: ""
	I1208 01:55:36.655378 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.655545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:36.655552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:36.655616 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:36.684592 1136586 cri.go:89] found id: ""
	I1208 01:55:36.684615 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.684623 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:36.684629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:36.684693 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:36.715198 1136586 cri.go:89] found id: ""
	I1208 01:55:36.715224 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.715233 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:36.715240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:36.715304 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:36.744302 1136586 cri.go:89] found id: ""
	I1208 01:55:36.744327 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.744337 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:36.744343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:36.744405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:36.769612 1136586 cri.go:89] found id: ""
	I1208 01:55:36.769637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.769646 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:36.769652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:36.769712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:36.796116 1136586 cri.go:89] found id: ""
	I1208 01:55:36.796138 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.796147 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:36.796153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:36.796212 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:36.824398 1136586 cri.go:89] found id: ""
	I1208 01:55:36.824424 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.824433 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:36.824439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:36.824543 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:36.849915 1136586 cri.go:89] found id: ""
	I1208 01:55:36.849942 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.849951 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:36.849960 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:36.849972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:36.904949 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:36.904986 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:36.919890 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:36.919919 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:36.983074 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:36.974477    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.975033    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.976856    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.977264    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.978951    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:36.983095 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:36.983111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:37.008505 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:37.008605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.548042 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:39.558613 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:39.558684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:39.582845 1136586 cri.go:89] found id: ""
	I1208 01:55:39.582870 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.582878 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:39.582885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:39.582946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:39.607991 1136586 cri.go:89] found id: ""
	I1208 01:55:39.608016 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.608025 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:39.608032 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:39.608094 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:39.633661 1136586 cri.go:89] found id: ""
	I1208 01:55:39.633685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.633694 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:39.633701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:39.633765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:39.658962 1136586 cri.go:89] found id: ""
	I1208 01:55:39.658989 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.658998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:39.659005 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:39.659064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:39.684407 1136586 cri.go:89] found id: ""
	I1208 01:55:39.684490 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.684514 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:39.684534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:39.684622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:39.715084 1136586 cri.go:89] found id: ""
	I1208 01:55:39.715109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.715118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:39.715125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:39.715191 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:39.740328 1136586 cri.go:89] found id: ""
	I1208 01:55:39.740352 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.740361 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:39.740368 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:39.740457 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:39.771393 1136586 cri.go:89] found id: ""
	I1208 01:55:39.771420 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.771429 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:39.771438 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:39.771450 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:39.797255 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:39.797291 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.826926 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:39.826954 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:39.882889 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:39.882925 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:39.898019 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:39.898048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:39.963174 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:39.954059    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.954638    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.956325    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.957071    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.958660    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:42.463393 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:42.473927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:42.474000 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:42.499722 1136586 cri.go:89] found id: ""
	I1208 01:55:42.499747 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.499757 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:42.499764 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:42.499842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:42.525555 1136586 cri.go:89] found id: ""
	I1208 01:55:42.525637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.525664 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:42.525671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:42.525745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:42.551105 1136586 cri.go:89] found id: ""
	I1208 01:55:42.551135 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.551144 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:42.551156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:42.551217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:42.576427 1136586 cri.go:89] found id: ""
	I1208 01:55:42.576500 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.576515 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:42.576522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:42.576587 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:42.606069 1136586 cri.go:89] found id: ""
	I1208 01:55:42.606102 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.606111 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:42.606118 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:42.606190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:42.631166 1136586 cri.go:89] found id: ""
	I1208 01:55:42.631193 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.631202 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:42.631208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:42.631267 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:42.655160 1136586 cri.go:89] found id: ""
	I1208 01:55:42.655238 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.655255 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:42.655266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:42.655329 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:42.680010 1136586 cri.go:89] found id: ""
	I1208 01:55:42.680085 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.680100 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:42.680111 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:42.680124 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:42.695151 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:42.695175 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:42.763022 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:42.763046 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:42.763059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:42.788301 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:42.788337 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:42.823956 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:42.823981 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
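The cri.go checks in each cycle rely on `crictl ps -a --quiet --name=<component>` printing one bare container ID per line, so `found id: ""` followed by `0 containers: []` means no matching container exists at all in root /run/containerd/runc/k8s.io, not that a container is merely unhealthy. A sketch of that check, run locally for illustration (minikube actually runs it through ssh_runner; the helper name is made up):

// list_cri.go - illustrative sketch of the container check in the log.
// Assumes sudo and crictl are available on the local machine.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// foundIDs mirrors the log's behaviour: --quiet prints one container ID
// per line, so an empty stdout means zero matches for that name filter.
func foundIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := foundIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}

Because `-a` includes exited containers, an empty result here is a strong signal: the control-plane pods were never created, which matches the kubelet being up but nothing else in the log.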
	I1208 01:55:45.380090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:45.395413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:45.395485 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:45.439897 1136586 cri.go:89] found id: ""
	I1208 01:55:45.439925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.439935 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:45.439942 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:45.440007 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:45.465988 1136586 cri.go:89] found id: ""
	I1208 01:55:45.466012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.466020 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:45.466027 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:45.466099 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:45.491807 1136586 cri.go:89] found id: ""
	I1208 01:55:45.491834 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.491843 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:45.491850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:45.491913 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:45.516818 1136586 cri.go:89] found id: ""
	I1208 01:55:45.516843 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.516854 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:45.516861 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:45.516921 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:45.542497 1136586 cri.go:89] found id: ""
	I1208 01:55:45.542522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.542531 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:45.542538 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:45.542609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:45.568083 1136586 cri.go:89] found id: ""
	I1208 01:55:45.568109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.568118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:45.568125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:45.568183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:45.593517 1136586 cri.go:89] found id: ""
	I1208 01:55:45.593544 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.593554 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:45.593561 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:45.593674 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:45.618329 1136586 cri.go:89] found id: ""
	I1208 01:55:45.618356 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.618366 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:45.618375 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:45.618387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:45.682426 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:45.674188    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.674739    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676256    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676719    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.678224    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:45.674188    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.674739    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676256    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676719    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.678224    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:45.682475 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:45.682489 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:45.708017 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:45.708054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:45.737945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:45.737975 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.793795 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:45.793830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.309212 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:48.320148 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:48.320220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:48.367705 1136586 cri.go:89] found id: ""
	I1208 01:55:48.367730 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.367739 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:48.367745 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:48.367804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:48.421729 1136586 cri.go:89] found id: ""
	I1208 01:55:48.421754 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.421763 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:48.421769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:48.421827 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:48.447771 1136586 cri.go:89] found id: ""
	I1208 01:55:48.447795 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.447804 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:48.447810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:48.447869 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:48.473161 1136586 cri.go:89] found id: ""
	I1208 01:55:48.473187 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.473196 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:48.473203 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:48.473265 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:48.498698 1136586 cri.go:89] found id: ""
	I1208 01:55:48.498723 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.498732 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:48.498738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:48.498798 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:48.527882 1136586 cri.go:89] found id: ""
	I1208 01:55:48.527908 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.527918 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:48.527925 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:48.528028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:48.554285 1136586 cri.go:89] found id: ""
	I1208 01:55:48.554311 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.554319 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:48.554326 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:48.554385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:48.580502 1136586 cri.go:89] found id: ""
	I1208 01:55:48.580529 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.580538 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:48.580548 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:48.580580 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:48.610294 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:48.610319 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:48.665141 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:48.665179 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.682234 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:48.682262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:48.759351 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:48.759375 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:48.759387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
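The timestamps show the cadence of this loop: a `pgrep -xnf kube-apiserver.*minikube.*` probe roughly every three seconds (01:55:42, :45, :48, :51, ...), each followed by the same container checks and log gathering when the probe fails. A generic sketch of that poll-until-deadline pattern, under the assumption of a 3s interval; `waitFor` and `apiserverRunning` are hypothetical names, not minikube's wait code:

// wait_apiserver.go - hypothetical sketch of the retry cadence above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the log's pgrep probe: pgrep exits non-zero
// when no matching process exists, which Run() surfaces as an error.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitFor polls pred at the given interval until it succeeds or the
// timeout elapses.
func waitFor(pred func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if pred() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for predicate")
}

func main() {
	if err := waitFor(apiserverRunning, 3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}

In the log the predicate never succeeds, so the loop runs until the test-level timeout and the failure bundle below is all that remains.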
	I1208 01:55:51.285923 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:51.298330 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:51.298405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:51.324185 1136586 cri.go:89] found id: ""
	I1208 01:55:51.324212 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.324220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:51.324227 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:51.324289 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:51.373377 1136586 cri.go:89] found id: ""
	I1208 01:55:51.373405 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.373414 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:51.373421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:51.373482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:51.433499 1136586 cri.go:89] found id: ""
	I1208 01:55:51.433522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.433531 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:51.433537 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:51.433595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:51.458517 1136586 cri.go:89] found id: ""
	I1208 01:55:51.458543 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.458552 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:51.458558 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:51.458622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:51.488348 1136586 cri.go:89] found id: ""
	I1208 01:55:51.488373 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.488382 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:51.488389 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:51.488471 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:51.513083 1136586 cri.go:89] found id: ""
	I1208 01:55:51.513109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.513119 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:51.513126 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:51.513190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:51.537741 1136586 cri.go:89] found id: ""
	I1208 01:55:51.537785 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.537804 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:51.537811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:51.537886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:51.563745 1136586 cri.go:89] found id: ""
	I1208 01:55:51.563769 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.563777 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:51.563786 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:51.563797 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:51.594103 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:51.594137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:51.650065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:51.650099 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:51.665199 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:51.665275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:51.732191 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:51.724269    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.724970    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726434    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726795    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.728304    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:51.724269    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.724970    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726434    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726795    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.728304    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:51.732221 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:51.732235 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.259222 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:54.271505 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:54.271585 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:54.300828 1136586 cri.go:89] found id: ""
	I1208 01:55:54.300860 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.300869 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:54.300875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:54.300944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:54.326203 1136586 cri.go:89] found id: ""
	I1208 01:55:54.326235 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.326245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:54.326251 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:54.326319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:54.392508 1136586 cri.go:89] found id: ""
	I1208 01:55:54.392537 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.392557 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:54.392564 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:54.392631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:54.443370 1136586 cri.go:89] found id: ""
	I1208 01:55:54.443403 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.443413 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:54.443419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:54.443479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:54.471931 1136586 cri.go:89] found id: ""
	I1208 01:55:54.471996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.472011 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:54.472018 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:54.472080 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:54.497863 1136586 cri.go:89] found id: ""
	I1208 01:55:54.497888 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.497897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:54.497905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:54.497966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:54.522372 1136586 cri.go:89] found id: ""
	I1208 01:55:54.522398 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.522408 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:54.522415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:54.522500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:54.549239 1136586 cri.go:89] found id: ""
	I1208 01:55:54.549266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.549275 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:54.549284 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:54.549316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:54.612864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:54.604382    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.605110    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.606733    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.607295    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.608865    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:54.604382    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.605110    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.606733    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.607295    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.608865    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:54.612887 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:54.612900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.639721 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:54.639758 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:54.671819 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:54.671845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:54.734691 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:54.734736 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.251176 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:57.261934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:57.262008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:57.287436 1136586 cri.go:89] found id: ""
	I1208 01:55:57.287460 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.287469 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:57.287476 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:57.287538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:57.313930 1136586 cri.go:89] found id: ""
	I1208 01:55:57.313953 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.313962 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:57.313968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:57.314028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:57.340222 1136586 cri.go:89] found id: ""
	I1208 01:55:57.340245 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.340254 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:57.340260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:57.340321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:57.380005 1136586 cri.go:89] found id: ""
	I1208 01:55:57.380028 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.380037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:57.380044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:57.380111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:57.421841 1136586 cri.go:89] found id: ""
	I1208 01:55:57.421863 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.421871 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:57.421877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:57.421935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:57.456549 1136586 cri.go:89] found id: ""
	I1208 01:55:57.456579 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.456588 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:57.456594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:57.456656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:57.480374 1136586 cri.go:89] found id: ""
	I1208 01:55:57.480472 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.480487 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:57.480494 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:57.480567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:57.504897 1136586 cri.go:89] found id: ""
	I1208 01:55:57.504925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.504935 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:57.504944 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:57.504955 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:57.530334 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:57.530377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:57.561764 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:57.561791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:57.620753 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:57.620788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.636064 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:57.636155 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:57.701326 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:57.693243    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.694039    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695592    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695921    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.697403    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:57.693243    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.694039    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695592    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695921    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.697403    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
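Each failed cycle assembles the same four-part diagnostic bundle: kubelet and containerd journals, filtered dmesg, and a container-status listing with a docker fallback. The command strings below are copied verbatim from the log; the Go wrapper around them is a hypothetical sketch for running the same bundle by hand:

// gather_logs.go - sketch of the per-cycle diagnostic bundle. Assumes a
// bash, sudo, and journalctl environment like the minikube node's.
package main

import (
	"fmt"
	"os/exec"
)

// source pairs a label from the log with the exact shell command it ran.
type source struct{ name, cmd string }

func main() {
	sources := []source{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		// The backticks fall back to the literal name so the
		// `|| sudo docker ps -a` branch can still fire if crictl
		// is absent from PATH.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s", s.name, err, out)
	}
}

Only the ordering of the four sources varies between cycles; their content is what distinguishes a scheduling failure (empty crictl output, as here) from a crash-looping control plane (exited containers with logs).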
	I1208 01:56:00.203093 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:00.255847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:00.255935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:00.303978 1136586 cri.go:89] found id: ""
	I1208 01:56:00.304070 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.304095 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:00.304117 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:00.304214 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:00.413194 1136586 cri.go:89] found id: ""
	I1208 01:56:00.413283 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.413307 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:00.413328 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:00.413451 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:00.536345 1136586 cri.go:89] found id: ""
	I1208 01:56:00.536426 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.536462 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:00.536495 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:00.536582 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:00.570659 1136586 cri.go:89] found id: ""
	I1208 01:56:00.570746 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.570873 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:00.570915 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:00.571047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:00.600506 1136586 cri.go:89] found id: ""
	I1208 01:56:00.600542 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.600552 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:00.600559 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:00.600627 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:00.628998 1136586 cri.go:89] found id: ""
	I1208 01:56:00.629028 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.629037 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:00.629045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:00.629113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:00.655017 1136586 cri.go:89] found id: ""
	I1208 01:56:00.655055 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.655066 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:00.655073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:00.655136 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:00.687531 1136586 cri.go:89] found id: ""
	I1208 01:56:00.687555 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.687589 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:00.687601 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:00.687621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:00.716787 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:00.716826 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:00.773133 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:00.773171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:00.788167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:00.788194 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:00.851515 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:00.842694    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.843297    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.844838    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.845268    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.846892    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:00.842694    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.843297    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.844838    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.845268    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.846892    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:00.851539 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:00.851553 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.378410 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:03.388811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:03.388882 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:03.416483 1136586 cri.go:89] found id: ""
	I1208 01:56:03.416508 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.416517 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:03.416523 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:03.416584 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:03.444854 1136586 cri.go:89] found id: ""
	I1208 01:56:03.444879 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.444889 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:03.444896 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:03.444957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:03.471069 1136586 cri.go:89] found id: ""
	I1208 01:56:03.471096 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.471106 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:03.471113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:03.471174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:03.497488 1136586 cri.go:89] found id: ""
	I1208 01:56:03.497516 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.497525 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:03.497532 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:03.497592 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:03.523459 1136586 cri.go:89] found id: ""
	I1208 01:56:03.523485 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.523494 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:03.523501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:03.523564 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:03.553004 1136586 cri.go:89] found id: ""
	I1208 01:56:03.553030 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.553038 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:03.553045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:03.553104 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:03.582299 1136586 cri.go:89] found id: ""
	I1208 01:56:03.582325 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.582334 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:03.582340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:03.582398 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:03.628970 1136586 cri.go:89] found id: ""
	I1208 01:56:03.629036 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.629057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:03.629078 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:03.629116 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:03.693550 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:03.693861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:03.725106 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:03.725132 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:03.797949 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:03.789559    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.790067    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.791636    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.792114    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.793692    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:03.789559    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.790067    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.791636    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.792114    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.793692    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
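Every describe nodes attempt fails the same way: kubectl's dial to [::1]:8443 is refused because nothing is listening on the apiserver port. A quick manual check from inside the node, using standard tooling rather than anything from the test harness (a sketch, assuming ss and curl are installed there):

	sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	# -k skips TLS verification; connection refused here confirms the
	# apiserver process is absent rather than merely unhealthy.
	curl -ksS https://localhost:8443/healthz || true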
	I1208 01:56:03.797973 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:03.797985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.822975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:03.823012 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:06.351834 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
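The pgrep call that opens each retry is the actual liveness check: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match. Run by hand it would look like the sketch below; the exit status, not the output, is what matters:

	# Prints a PID and exits 0 once kube-apiserver is running; exits 1 here.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'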
	I1208 01:56:06.362738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:06.362832 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:06.388195 1136586 cri.go:89] found id: ""
	I1208 01:56:06.388222 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.388231 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:06.388238 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:06.388305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:06.413430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.413536 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.413559 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:06.413580 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:06.413657 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:06.438706 1136586 cri.go:89] found id: ""
	I1208 01:56:06.438770 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.438794 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:06.438813 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:06.438893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:06.463796 1136586 cri.go:89] found id: ""
	I1208 01:56:06.463860 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.463883 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:06.463902 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:06.463979 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:06.493653 1136586 cri.go:89] found id: ""
	I1208 01:56:06.493719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.493743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:06.493761 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:06.493839 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:06.518393 1136586 cri.go:89] found id: ""
	I1208 01:56:06.518490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.518516 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:06.518540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:06.518628 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:06.547357 1136586 cri.go:89] found id: ""
	I1208 01:56:06.547423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.547444 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:06.547464 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:06.547537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:06.572430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.572460 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.572469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:06.572479 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:06.572520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:06.631771 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:06.631805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:06.648910 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:06.648992 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:06.719373 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:06.710549    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.711634    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.712364    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.713608    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.714264    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:06.710549    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.711634    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.712364    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.713608    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.714264    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:06.719447 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:06.719474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:06.744508 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:06.744540 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:09.275604 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:09.286432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:09.286521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:09.312708 1136586 cri.go:89] found id: ""
	I1208 01:56:09.312733 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.312742 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:09.312749 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:09.312809 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:09.341427 1136586 cri.go:89] found id: ""
	I1208 01:56:09.341452 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.341461 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:09.341468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:09.341533 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:09.364765 1136586 cri.go:89] found id: ""
	I1208 01:56:09.364791 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.364801 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:09.364808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:09.364871 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:09.390922 1136586 cri.go:89] found id: ""
	I1208 01:56:09.390950 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.390959 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:09.390965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:09.391027 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:09.415255 1136586 cri.go:89] found id: ""
	I1208 01:56:09.415279 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.415288 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:09.415294 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:09.415351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:09.443874 1136586 cri.go:89] found id: ""
	I1208 01:56:09.443898 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.443907 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:09.443913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:09.443973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:09.473821 1136586 cri.go:89] found id: ""
	I1208 01:56:09.473846 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.473855 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:09.473862 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:09.473920 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:09.502023 1136586 cri.go:89] found id: ""
	I1208 01:56:09.502048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.502057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:09.502066 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:09.502077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:09.557585 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:09.557621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:09.572644 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:09.572673 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:09.660866 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:09.652629    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.653404    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.654983    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.655317    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.656808    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:09.652629    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.653404    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.654983    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.655317    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.656808    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:09.660889 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:09.660902 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:09.687200 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:09.687238 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:12.215648 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:12.227315 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:12.227391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:12.254343 1136586 cri.go:89] found id: ""
	I1208 01:56:12.254369 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.254378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:12.254385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:12.254467 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:12.279481 1136586 cri.go:89] found id: ""
	I1208 01:56:12.279550 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.279574 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:12.279594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:12.279683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:12.305844 1136586 cri.go:89] found id: ""
	I1208 01:56:12.305910 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.305933 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:12.305951 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:12.306041 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:12.330060 1136586 cri.go:89] found id: ""
	I1208 01:56:12.330139 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.330162 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:12.330181 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:12.330273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:12.356745 1136586 cri.go:89] found id: ""
	I1208 01:56:12.356813 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.356840 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:12.356858 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:12.356943 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:12.386368 1136586 cri.go:89] found id: ""
	I1208 01:56:12.386475 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.386492 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:12.386500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:12.386563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:12.412659 1136586 cri.go:89] found id: ""
	I1208 01:56:12.412685 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.412694 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:12.412700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:12.412779 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:12.440569 1136586 cri.go:89] found id: ""
	I1208 01:56:12.440596 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.440604 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:12.440615 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:12.440626 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:12.496637 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:12.496674 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:12.511594 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:12.511624 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:12.580748 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:12.580771 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:12.580784 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:12.613723 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:12.613802 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:15.152673 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:15.163614 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:15.163688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:15.192414 1136586 cri.go:89] found id: ""
	I1208 01:56:15.192449 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.192458 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:15.192465 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:15.192537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:15.219157 1136586 cri.go:89] found id: ""
	I1208 01:56:15.219182 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.219191 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:15.219198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:15.219258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:15.244756 1136586 cri.go:89] found id: ""
	I1208 01:56:15.244824 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.244839 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:15.244846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:15.244907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:15.271473 1136586 cri.go:89] found id: ""
	I1208 01:56:15.271546 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.271562 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:15.271569 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:15.271637 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:15.297385 1136586 cri.go:89] found id: ""
	I1208 01:56:15.297411 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.297430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:15.297437 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:15.297506 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:15.323057 1136586 cri.go:89] found id: ""
	I1208 01:56:15.323127 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.323149 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:15.323158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:15.323226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:15.348696 1136586 cri.go:89] found id: ""
	I1208 01:56:15.348771 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.348788 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:15.348795 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:15.348857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:15.373461 1136586 cri.go:89] found id: ""
	I1208 01:56:15.373483 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.373491 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:15.373500 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:15.373512 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:15.403816 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:15.403845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:15.463833 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:15.463875 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:15.479494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:15.479522 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:15.551161 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:15.551185 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:15.551199 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.077116 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:18.087881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:18.087956 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:18.116452 1136586 cri.go:89] found id: ""
	I1208 01:56:18.116480 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.116490 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:18.116497 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:18.116558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:18.147311 1136586 cri.go:89] found id: ""
	I1208 01:56:18.147339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.147347 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:18.147353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:18.147415 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:18.173654 1136586 cri.go:89] found id: ""
	I1208 01:56:18.173680 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.173689 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:18.173695 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:18.173754 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:18.198118 1136586 cri.go:89] found id: ""
	I1208 01:56:18.198142 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.198151 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:18.198158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:18.198220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:18.229347 1136586 cri.go:89] found id: ""
	I1208 01:56:18.229371 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.229379 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:18.229385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:18.229443 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:18.253505 1136586 cri.go:89] found id: ""
	I1208 01:56:18.253528 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.253536 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:18.253542 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:18.253601 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:18.279471 1136586 cri.go:89] found id: ""
	I1208 01:56:18.279496 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.279506 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:18.279513 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:18.279571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:18.309796 1136586 cri.go:89] found id: ""
	I1208 01:56:18.309819 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.309827 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:18.309839 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:18.309850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:18.366744 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:18.366779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:18.381719 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:18.381749 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:18.448045 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:18.448070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:18.448082 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.473293 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:18.473332 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.004404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:21.017333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:21.017424 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:21.042756 1136586 cri.go:89] found id: ""
	I1208 01:56:21.042823 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.042839 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:21.042847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:21.042907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:21.068017 1136586 cri.go:89] found id: ""
	I1208 01:56:21.068042 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.068051 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:21.068057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:21.068134 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:21.095695 1136586 cri.go:89] found id: ""
	I1208 01:56:21.095719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.095729 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:21.095735 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:21.095833 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:21.126473 1136586 cri.go:89] found id: ""
	I1208 01:56:21.126499 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.126508 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:21.126515 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:21.126578 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:21.159320 1136586 cri.go:89] found id: ""
	I1208 01:56:21.159344 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.159354 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:21.159360 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:21.159421 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:21.189716 1136586 cri.go:89] found id: ""
	I1208 01:56:21.189740 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.189790 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:21.189808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:21.189875 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:21.215065 1136586 cri.go:89] found id: ""
	I1208 01:56:21.215090 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.215099 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:21.215105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:21.215186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:21.239527 1136586 cri.go:89] found id: ""
	I1208 01:56:21.239551 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.239559 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:21.239568 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:21.239581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:21.303585 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:21.303607 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:21.303622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:21.329232 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:21.329269 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.357399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:21.357429 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:21.413905 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:21.413941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:23.930606 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:23.941524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:23.941609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:23.969400 1136586 cri.go:89] found id: ""
	I1208 01:56:23.969431 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.969441 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:23.969447 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:23.969510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:23.999105 1136586 cri.go:89] found id: ""
	I1208 01:56:23.999131 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.999140 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:23.999147 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:23.999216 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:24.031489 1136586 cri.go:89] found id: ""
	I1208 01:56:24.031517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.031527 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:24.031533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:24.031598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:24.057876 1136586 cri.go:89] found id: ""
	I1208 01:56:24.057902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.057911 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:24.057917 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:24.057978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:24.092220 1136586 cri.go:89] found id: ""
	I1208 01:56:24.092247 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.092257 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:24.092263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:24.092324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:24.125261 1136586 cri.go:89] found id: ""
	I1208 01:56:24.125289 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.125298 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:24.125306 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:24.125367 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:24.153744 1136586 cri.go:89] found id: ""
	I1208 01:56:24.153772 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.153782 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:24.153789 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:24.153852 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:24.179839 1136586 cri.go:89] found id: ""
	I1208 01:56:24.179866 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.179875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:24.179884 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:24.179916 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:24.237479 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:24.237514 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:24.252654 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:24.252693 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:24.325211 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:24.325232 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:24.325244 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:24.351049 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:24.351084 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
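The container status gather embeds a small fallback: if crictl is not on the PATH, the backticked substitution keeps the bare name so the first command fails fast, and docker ps -a is tried instead. The same logic spelled out as plain shell (a sketch, assuming the same sudo access the harness uses):

	CRICTL=$(which crictl || echo crictl)        # bare name if crictl is not installed
	sudo "$CRICTL" ps -a || sudo docker ps -a    # docker listing is the fallback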
	I1208 01:56:26.879645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:26.891936 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:26.892009 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:26.916974 1136586 cri.go:89] found id: ""
	I1208 01:56:26.916998 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.917007 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:26.917013 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:26.917072 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:26.941861 1136586 cri.go:89] found id: ""
	I1208 01:56:26.941885 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.941894 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:26.941900 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:26.941963 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:26.974560 1136586 cri.go:89] found id: ""
	I1208 01:56:26.974587 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.974596 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:26.974602 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:26.974663 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:26.999892 1136586 cri.go:89] found id: ""
	I1208 01:56:26.999921 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.999930 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:26.999937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:27.000021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:27.030397 1136586 cri.go:89] found id: ""
	I1208 01:56:27.030421 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.030430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:27.030436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:27.030521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:27.059896 1136586 cri.go:89] found id: ""
	I1208 01:56:27.059923 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.059932 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:27.059941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:27.059999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:27.084629 1136586 cri.go:89] found id: ""
	I1208 01:56:27.084656 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.084665 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:27.084671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:27.084733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:27.119162 1136586 cri.go:89] found id: ""
	I1208 01:56:27.119185 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.119193 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:27.119202 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:27.119213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:27.179450 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:27.179487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:27.194459 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:27.194486 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:27.261775 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:27.261797 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:27.261810 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:27.287303 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:27.287338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
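
Note: the cycle above is the shape repeated for the rest of this log: minikube probes for a kube-apiserver process, then lists CRI containers for each control-plane component with "sudo crictl ps -a --quiet --name=<component>", treating empty output (found id: "") as "container not created yet". A minimal sketch of that per-component probe, assuming local execution with os/exec rather than minikube's ssh_runner (the helper name probe is invented for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe mirrors the log's check: crictl ps -a --quiet prints one container
// ID per line, so empty output means the container does not exist yet.
func probe(component string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

func main() {
	// The same component list the log walks through on every cycle.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		found, err := probe(c)
		fmt.Printf("%s: found=%v err=%v\n", c, found, err)
	}
}

Every probe in this run returns found=false, which is why the loop never progresses.
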
	I1208 01:56:29.820302 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:29.830851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:29.830917 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:29.869685 1136586 cri.go:89] found id: ""
	I1208 01:56:29.869717 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.869726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:29.869733 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:29.869789 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:29.904021 1136586 cri.go:89] found id: ""
	I1208 01:56:29.904048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.904057 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:29.904063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:29.904122 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:29.929826 1136586 cri.go:89] found id: ""
	I1208 01:56:29.929854 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.929864 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:29.929870 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:29.929935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:29.954915 1136586 cri.go:89] found id: ""
	I1208 01:56:29.954939 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.954947 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:29.954954 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:29.955013 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:29.980194 1136586 cri.go:89] found id: ""
	I1208 01:56:29.980218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.980227 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:29.980233 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:29.980296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:30.034520 1136586 cri.go:89] found id: ""
	I1208 01:56:30.034556 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.034566 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:30.034573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:30.034648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:30.069395 1136586 cri.go:89] found id: ""
	I1208 01:56:30.069422 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.069432 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:30.069439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:30.069507 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:30.109430 1136586 cri.go:89] found id: ""
	I1208 01:56:30.109459 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.109469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:30.109479 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:30.109491 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:30.146595 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:30.146631 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:30.206376 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:30.206419 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:30.225510 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:30.225621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:30.296464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:30.287753    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.288259    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290021    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290422    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.291920    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:30.296484 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:30.296497 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
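
Note: each "describe nodes" attempt fails before kubectl can do anything useful: client-go cannot fetch the API group list from https://localhost:8443 because nothing is listening on that port. A sketch of the same reachability check done directly with client-go, assuming the standard k8s.io/client-go packages are available and using the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the failing kubectl invocation uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// While no apiserver listens on 8443 this fails with "connection refused",
	// matching the memcache.go errors in the log.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
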
	I1208 01:56:32.823121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:32.833454 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:32.833529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:32.868695 1136586 cri.go:89] found id: ""
	I1208 01:56:32.868721 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.868740 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:32.868747 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:32.868821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:32.906232 1136586 cri.go:89] found id: ""
	I1208 01:56:32.906253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.906261 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:32.906267 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:32.906327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:32.932154 1136586 cri.go:89] found id: ""
	I1208 01:56:32.932181 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.932190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:32.932200 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:32.932262 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:32.957782 1136586 cri.go:89] found id: ""
	I1208 01:56:32.957805 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.957814 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:32.957821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:32.957886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:32.983951 1136586 cri.go:89] found id: ""
	I1208 01:56:32.983978 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.983988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:32.983995 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:32.984057 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:33.011290 1136586 cri.go:89] found id: ""
	I1208 01:56:33.011316 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.011325 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:33.011340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:33.011410 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:33.038703 1136586 cri.go:89] found id: ""
	I1208 01:56:33.038726 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.038735 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:33.038741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:33.038799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:33.063041 1136586 cri.go:89] found id: ""
	I1208 01:56:33.063065 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.063074 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:33.063084 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:33.063115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:33.078006 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:33.078036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:33.170567 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:33.170591 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:33.170607 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:33.196077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:33.196111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:33.227121 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:33.227152 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:35.783290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:35.793700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:35.793778 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:35.821903 1136586 cri.go:89] found id: ""
	I1208 01:56:35.821937 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.821946 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:35.821953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:35.822014 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:35.854878 1136586 cri.go:89] found id: ""
	I1208 01:56:35.854902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.854910 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:35.854916 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:35.854978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:35.881395 1136586 cri.go:89] found id: ""
	I1208 01:56:35.881418 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.881426 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:35.881432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:35.881490 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:35.910658 1136586 cri.go:89] found id: ""
	I1208 01:56:35.910679 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.910688 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:35.910694 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:35.910753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:35.939089 1136586 cri.go:89] found id: ""
	I1208 01:56:35.939114 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.939129 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:35.939137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:35.939199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:35.964135 1136586 cri.go:89] found id: ""
	I1208 01:56:35.964158 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.964166 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:35.964173 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:35.964235 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:35.990669 1136586 cri.go:89] found id: ""
	I1208 01:56:35.990692 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.990701 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:35.990707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:35.990770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:36.020165 1136586 cri.go:89] found id: ""
	I1208 01:56:36.020191 1136586 logs.go:282] 0 containers: []
	W1208 01:56:36.020207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:36.020217 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:36.020228 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:36.076411 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:36.076452 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:36.093602 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:36.093683 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:36.181516 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:36.181540 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:36.181552 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:36.207107 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:36.207142 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
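
Note: the pgrep timestamps (01:56:26.8, 01:56:29.8, 01:56:32.8, 01:56:35.7, ...) show the wait loop retrying on a roughly three-second cadence. A minimal sketch of that pattern, assuming a plain TCP probe of the apiserver port stands in for minikube's richer health check (interval and timeout values are illustrative):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// pollAPIServer retries a cheap TCP dial on a fixed interval until the
// deadline, the cadence visible in this log's retry loop.
func pollAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Println("not ready:", err) // "connection refused" throughout this run
		time.Sleep(interval)
	}
	return errors.New("apiserver never became reachable")
}

func main() {
	if err := pollAPIServer("localhost:8443", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}
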
	I1208 01:56:38.735690 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:38.746691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:38.746767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:38.773309 1136586 cri.go:89] found id: ""
	I1208 01:56:38.773339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.773349 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:38.773356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:38.773423 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:38.801208 1136586 cri.go:89] found id: ""
	I1208 01:56:38.801235 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.801245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:38.801254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:38.801317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:38.826539 1136586 cri.go:89] found id: ""
	I1208 01:56:38.826566 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.826575 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:38.826582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:38.826642 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:38.865488 1136586 cri.go:89] found id: ""
	I1208 01:56:38.865517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.865527 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:38.865533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:38.865594 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:38.900627 1136586 cri.go:89] found id: ""
	I1208 01:56:38.900655 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.900664 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:38.900670 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:38.900733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:38.927847 1136586 cri.go:89] found id: ""
	I1208 01:56:38.927871 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.927880 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:38.927887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:38.927949 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:38.952594 1136586 cri.go:89] found id: ""
	I1208 01:56:38.952666 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.952689 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:38.952714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:38.952803 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:38.978089 1136586 cri.go:89] found id: ""
	I1208 01:56:38.978116 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.978125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:38.978134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:38.978147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:39.047378 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:39.047401 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:39.047414 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:39.073359 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:39.073402 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:39.112761 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:39.112796 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:39.176177 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:39.176214 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
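
Note: between probes, each cycle gathers diagnostics: the last 400 journal lines for the kubelet and containerd units, a filtered dmesg, a "describe nodes" attempt, and a container listing. A sketch of the journalctl step, again assuming local execution (the helper name lastUnitLogs is invented):

package main

import (
	"fmt"
	"os/exec"
)

// lastUnitLogs fetches the trailing n journal entries for a systemd unit,
// the same invocation the log runs for kubelet and containerd.
func lastUnitLogs(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := lastUnitLogs("kubelet", 400)
	if err != nil {
		fmt.Println("journalctl failed:", err)
	}
	fmt.Print(logs)
}
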
	I1208 01:56:41.692238 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:41.702585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:41.702656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:41.726879 1136586 cri.go:89] found id: ""
	I1208 01:56:41.726913 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.726923 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:41.726930 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:41.726996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:41.752119 1136586 cri.go:89] found id: ""
	I1208 01:56:41.752143 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.752152 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:41.752158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:41.752215 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:41.777446 1136586 cri.go:89] found id: ""
	I1208 01:56:41.777473 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.777482 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:41.777488 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:41.777548 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:41.804077 1136586 cri.go:89] found id: ""
	I1208 01:56:41.804103 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.804112 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:41.804119 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:41.804179 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:41.828883 1136586 cri.go:89] found id: ""
	I1208 01:56:41.828908 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.828917 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:41.828924 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:41.828987 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:41.875100 1136586 cri.go:89] found id: ""
	I1208 01:56:41.875128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.875138 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:41.875145 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:41.875204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:41.907099 1136586 cri.go:89] found id: ""
	I1208 01:56:41.907126 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.907136 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:41.907142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:41.907201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:41.936702 1136586 cri.go:89] found id: ""
	I1208 01:56:41.936729 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.936738 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:41.936748 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:41.936780 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:41.992993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:41.993029 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:42.008895 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:42.008988 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:42.090561 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:42.090592 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:42.090605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:42.127950 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:42.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
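
Note: every cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", which matches against the full command line (-f), requires an exact pattern match (-x), and reports only the newest match (-n); pgrep exits with status 1 when nothing matches, the steady state throughout this run. A sketch of that process check, with local os/exec and an invented helper name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID mirrors the log's process check; exit status 1 from pgrep
// means no matching process rather than a command failure.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return "", errors.New("no kube-apiserver process found")
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pid, err := apiserverPID()
	fmt.Println(pid, err)
}
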
	I1208 01:56:44.678288 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:44.690356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:44.690429 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:44.716072 1136586 cri.go:89] found id: ""
	I1208 01:56:44.716095 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.716105 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:44.716111 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:44.716173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:44.742318 1136586 cri.go:89] found id: ""
	I1208 01:56:44.742347 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.742357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:44.742363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:44.742428 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:44.768786 1136586 cri.go:89] found id: ""
	I1208 01:56:44.768814 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.768824 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:44.768830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:44.768892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:44.794997 1136586 cri.go:89] found id: ""
	I1208 01:56:44.795020 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.795028 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:44.795035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:44.795093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:44.824626 1136586 cri.go:89] found id: ""
	I1208 01:56:44.824693 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.824719 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:44.824738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:44.824823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:44.854631 1136586 cri.go:89] found id: ""
	I1208 01:56:44.854660 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.854682 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:44.854707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:44.854790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:44.886832 1136586 cri.go:89] found id: ""
	I1208 01:56:44.886853 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.886862 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:44.886868 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:44.886931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:44.918383 1136586 cri.go:89] found id: ""
	I1208 01:56:44.918409 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.918420 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:44.918430 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:44.918441 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:44.974124 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:44.974160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:44.989499 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:44.989581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:45.183353 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:45.183384 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:45.183415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:45.225041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:45.225130 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:47.776374 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:47.786874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:47.786944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:47.817071 1136586 cri.go:89] found id: ""
	I1208 01:56:47.817097 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.817106 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:47.817113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:47.817173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:47.848935 1136586 cri.go:89] found id: ""
	I1208 01:56:47.848964 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.848972 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:47.848978 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:47.849039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:47.879145 1136586 cri.go:89] found id: ""
	I1208 01:56:47.879175 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.879190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:47.879196 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:47.879255 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:47.919571 1136586 cri.go:89] found id: ""
	I1208 01:56:47.919595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.919605 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:47.919612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:47.919678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:47.945072 1136586 cri.go:89] found id: ""
	I1208 01:56:47.945098 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.945107 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:47.945113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:47.945176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:47.972399 1136586 cri.go:89] found id: ""
	I1208 01:56:47.972423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.972432 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:47.972446 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:47.972513 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:47.998198 1136586 cri.go:89] found id: ""
	I1208 01:56:47.998225 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.998234 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:47.998240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:47.998357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:48.026417 1136586 cri.go:89] found id: ""
	I1208 01:56:48.026469 1136586 logs.go:282] 0 containers: []
	W1208 01:56:48.026480 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:48.026514 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:48.026534 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:48.083726 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:48.083765 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:48.102473 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:48.102503 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:48.195413 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:48.195448 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:48.195461 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:48.222088 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:48.222125 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
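Editorial note: the cycle above shows minikube's log-gathering probe — for each control-plane component it shells out (via SSH into the node) to `crictl ps -a --quiet --name=<component>`, and an empty result produces the "No container was found matching" warning. Below is a minimal local sketch of that probe pattern; `probeComponent` and the component list are illustrative names, not minikube's API, and it assumes `crictl` is installed where it runs.

```go
// Sketch only: approximates the per-component probe seen in cri.go above.
// probeComponent is a hypothetical helper, not part of minikube.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeComponent runs `crictl ps -a --quiet --name=<name>` and returns
// the container IDs found (crictl prints one ID per line).
func probeComponent(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := probeComponent(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
```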
	I1208 01:56:50.752185 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.763217 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:50.763296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:50.792851 1136586 cri.go:89] found id: ""
	I1208 01:56:50.792877 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.792886 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:50.792893 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:50.792952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:50.818544 1136586 cri.go:89] found id: ""
	I1208 01:56:50.818573 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.818582 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:50.818590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:50.818653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:50.856256 1136586 cri.go:89] found id: ""
	I1208 01:56:50.856286 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.856296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:50.856303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:50.856365 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:50.890254 1136586 cri.go:89] found id: ""
	I1208 01:56:50.890277 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.890286 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:50.890292 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:50.890351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:50.919013 1136586 cri.go:89] found id: ""
	I1208 01:56:50.919039 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.919048 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:50.919054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:50.919115 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:50.943865 1136586 cri.go:89] found id: ""
	I1208 01:56:50.943888 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.943897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:50.943903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:50.943968 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:50.967885 1136586 cri.go:89] found id: ""
	I1208 01:56:50.967912 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.967921 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:50.967927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:50.967984 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:50.997744 1136586 cri.go:89] found id: ""
	I1208 01:56:50.997779 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.997788 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:50.997854 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:50.997874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:51.066108 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:51.066131 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:51.066144 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:51.092098 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:51.092134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:51.129363 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:51.129392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:51.192049 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:51.192086 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
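Editorial note: every `kubectl describe nodes` attempt above fails the same way — `dial tcp [::1]:8443: connect: connection refused` — which is consistent with the probes finding no kube-apiserver container: nothing is listening on the apiserver port at all. A minimal sketch of that symptom check (not minikube code) is:

```go
// Sketch only: checks whether anything is listening on the apiserver port,
// reproducing the "connection refused" symptom logged above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: "dial tcp [::1]:8443: connect: connection refused"
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
```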
	I1208 01:56:53.707235 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.718177 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:53.718245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:53.743649 1136586 cri.go:89] found id: ""
	I1208 01:56:53.743674 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.743684 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:53.743690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:53.743755 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:53.769475 1136586 cri.go:89] found id: ""
	I1208 01:56:53.769503 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.769512 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:53.769519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:53.769581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:53.795104 1136586 cri.go:89] found id: ""
	I1208 01:56:53.795128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.795137 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:53.795143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:53.795219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:53.824300 1136586 cri.go:89] found id: ""
	I1208 01:56:53.824322 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.824335 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:53.824342 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:53.824403 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:53.858957 1136586 cri.go:89] found id: ""
	I1208 01:56:53.858984 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.858993 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:53.858999 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:53.859059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:53.889936 1136586 cri.go:89] found id: ""
	I1208 01:56:53.889958 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.889967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:53.889974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:53.890042 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:53.917197 1136586 cri.go:89] found id: ""
	I1208 01:56:53.917221 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.917230 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:53.917236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:53.917301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:53.944246 1136586 cri.go:89] found id: ""
	I1208 01:56:53.944313 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.944340 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:53.944364 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:53.944395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:54.000224 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:54.000263 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:54.018576 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:54.018610 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:54.091957 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:54.092037 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:54.092064 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:54.121226 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:54.121262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
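Editorial note: each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*` — `-f` matches against the full command line, `-x` requires the pattern to match it exactly, and `-n` selects the newest match. pgrep exits 0 on a match and 1 otherwise, so a non-zero exit here is what sends minikube back into another probe-and-gather round. A minimal sketch of that check:

```go
// Sketch only: mirrors the pgrep health check at the top of each cycle.
// Flags: -x exact full-line match, -n newest process, -f match command line.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	if err != nil {
		// pgrep exits non-zero when nothing matches.
		fmt.Println("no kube-apiserver process found; re-gather logs and retry")
		return
	}
	fmt.Println("kube-apiserver process is running")
}
```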
	I1208 01:56:56.665113 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.675727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:56.675793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:56.702486 1136586 cri.go:89] found id: ""
	I1208 01:56:56.702512 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.702521 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:56.702536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:56.702595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:56.727464 1136586 cri.go:89] found id: ""
	I1208 01:56:56.727490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.727499 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:56.727506 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:56.727574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:56.755210 1136586 cri.go:89] found id: ""
	I1208 01:56:56.755242 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.755252 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:56.755259 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:56.755317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:56.780366 1136586 cri.go:89] found id: ""
	I1208 01:56:56.780394 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.780403 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:56.780409 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:56.780502 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:56.805514 1136586 cri.go:89] found id: ""
	I1208 01:56:56.805541 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.805551 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:56.805557 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:56.805615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:56.830960 1136586 cri.go:89] found id: ""
	I1208 01:56:56.830985 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.830994 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:56.831001 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:56.831067 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:56.877742 1136586 cri.go:89] found id: ""
	I1208 01:56:56.877812 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.877847 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:56.877873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:56.877969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:56.909088 1136586 cri.go:89] found id: ""
	I1208 01:56:56.909173 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.909197 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:56.909218 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:56.909261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:56.937087 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:56.937122 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.964566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:56.964593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:57.025871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:57.025917 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:57.041167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:57.041200 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:57.113620 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:59.615300 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.625998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:59.626071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:59.651013 1136586 cri.go:89] found id: ""
	I1208 01:56:59.651040 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.651050 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:59.651058 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:59.651140 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:59.676526 1136586 cri.go:89] found id: ""
	I1208 01:56:59.676595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.676619 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:59.676632 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:59.676706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:59.705956 1136586 cri.go:89] found id: ""
	I1208 01:56:59.705982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.705992 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:59.705998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:59.706058 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:59.732960 1136586 cri.go:89] found id: ""
	I1208 01:56:59.732988 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.732998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:59.733004 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:59.733064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:59.761227 1136586 cri.go:89] found id: ""
	I1208 01:56:59.761253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.761262 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:59.761268 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:59.761332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:59.795189 1136586 cri.go:89] found id: ""
	I1208 01:56:59.795218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.795227 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:59.795235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:59.795296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:59.820209 1136586 cri.go:89] found id: ""
	I1208 01:56:59.820278 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.820303 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:59.820317 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:59.820397 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:59.854906 1136586 cri.go:89] found id: ""
	I1208 01:56:59.854982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.855003 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:59.855031 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:59.855075 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:59.895804 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:59.895880 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:59.953038 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:59.953076 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:59.968348 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:59.968383 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:00.183275 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:00.183303 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:00.183318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
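Editorial note: the "container status" gather above uses a shell fallback chain — ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` — i.e. prefer `crictl` if it resolves on PATH, and if that whole command fails, fall back to the Docker CLI. A minimal sketch of the same fallback logic, with `containerStatus` as a hypothetical helper:

```go
// Sketch only: reproduces the fallback of the "container status" command
// above -- try crictl first, fall back to docker if crictl fails.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failing: fall back to the Docker CLI.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker could list containers:", err)
		return
	}
	fmt.Print(out)
}
```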
	I1208 01:57:02.767941 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.778692 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:02.778767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:02.804099 1136586 cri.go:89] found id: ""
	I1208 01:57:02.804168 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.804192 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:02.804207 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:02.804282 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:02.829415 1136586 cri.go:89] found id: ""
	I1208 01:57:02.829442 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.829451 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:02.829456 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:02.829516 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:02.876418 1136586 cri.go:89] found id: ""
	I1208 01:57:02.876448 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.876456 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:02.876462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:02.876521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:02.908999 1136586 cri.go:89] found id: ""
	I1208 01:57:02.909021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.909030 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:02.909036 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:02.909095 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:02.935740 1136586 cri.go:89] found id: ""
	I1208 01:57:02.935763 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.935772 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:02.935781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:02.935845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:02.962615 1136586 cri.go:89] found id: ""
	I1208 01:57:02.962640 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.962649 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:02.962676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:02.962762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:02.988338 1136586 cri.go:89] found id: ""
	I1208 01:57:02.988413 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.988447 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:02.988469 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:02.988563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:03.016087 1136586 cri.go:89] found id: ""
	I1208 01:57:03.016115 1136586 logs.go:282] 0 containers: []
	W1208 01:57:03.016125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:03.016135 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:03.016147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:03.045768 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:03.045798 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:03.103820 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:03.103856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:03.119506 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:03.119544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:03.188553 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:03.188577 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:03.188591 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.714622 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.728070 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:05.728144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:05.752683 1136586 cri.go:89] found id: ""
	I1208 01:57:05.752709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.752718 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:05.752725 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:05.752804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:05.777888 1136586 cri.go:89] found id: ""
	I1208 01:57:05.777926 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.777935 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:05.777941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:05.778004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:05.803200 1136586 cri.go:89] found id: ""
	I1208 01:57:05.803227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.803236 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:05.803243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:05.803305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:05.828694 1136586 cri.go:89] found id: ""
	I1208 01:57:05.828719 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.828728 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:05.828734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:05.828795 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:05.871706 1136586 cri.go:89] found id: ""
	I1208 01:57:05.871734 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.871743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:05.871750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:05.871810 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:05.910109 1136586 cri.go:89] found id: ""
	I1208 01:57:05.910130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.910139 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:05.910146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:05.910211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:05.935420 1136586 cri.go:89] found id: ""
	I1208 01:57:05.935446 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.935455 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:05.935463 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:05.935524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:05.964805 1136586 cri.go:89] found id: ""
	I1208 01:57:05.964830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.964840 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:05.964850 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:05.964861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.991812 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:05.991850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:06.023289 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:06.023318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:06.079947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:06.079984 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:06.094973 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:06.095001 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:06.164494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
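Editorial note: the timestamps (01:56:47, :50, :53, :56, :59, 01:57:02, ...) show the whole probe-and-gather cycle repeating on a roughly three-second cadence, which reads as a poll-until-deadline loop waiting for the apiserver to come up. A minimal sketch of that shape, assuming a stand-in `apiserverUp` check and an illustrative two-minute deadline (the real timeout is not visible in this excerpt):

```go
// Sketch only: the ~3s poll-until-timeout cadence visible in the log.
// apiserverUp is a stand-in for the real pgrep/crictl checks; the
// 2-minute deadline is illustrative, not minikube's actual value.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverUp() bool {
	conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the cadence in the log
	}
	fmt.Println("timed out waiting for apiserver")
}
```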
	I1208 01:57:08.664783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.675873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:08.675951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:08.701544 1136586 cri.go:89] found id: ""
	I1208 01:57:08.701570 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.701579 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:08.701585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:08.701644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:08.726739 1136586 cri.go:89] found id: ""
	I1208 01:57:08.726761 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.726770 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:08.726777 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:08.726834 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:08.752551 1136586 cri.go:89] found id: ""
	I1208 01:57:08.752579 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.752590 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:08.752596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:08.752661 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:08.785394 1136586 cri.go:89] found id: ""
	I1208 01:57:08.785418 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.785427 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:08.785434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:08.785494 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:08.809379 1136586 cri.go:89] found id: ""
	I1208 01:57:08.809411 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.809420 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:08.809426 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:08.809493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:08.834793 1136586 cri.go:89] found id: ""
	I1208 01:57:08.834820 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.834829 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:08.834836 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:08.834895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:08.871040 1136586 cri.go:89] found id: ""
	I1208 01:57:08.871067 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.871077 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:08.871083 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:08.871149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:08.898916 1136586 cri.go:89] found id: ""
	I1208 01:57:08.898943 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.898953 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:08.898961 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:08.898973 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:08.958751 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:08.958791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:08.975804 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:08.975842 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:09.045728 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:09.045754 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:09.045768 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:09.071802 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:09.071844 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:11.602631 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.621366 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:11.621447 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:11.654343 1136586 cri.go:89] found id: ""
	I1208 01:57:11.654378 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.654387 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:11.654396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:11.654496 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:11.687384 1136586 cri.go:89] found id: ""
	I1208 01:57:11.687421 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.687431 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:11.687444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:11.687515 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:11.716671 1136586 cri.go:89] found id: ""
	I1208 01:57:11.716709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.716720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:11.716726 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:11.716796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:11.742357 1136586 cri.go:89] found id: ""
	I1208 01:57:11.742391 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.742400 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:11.742407 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:11.742493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:11.768963 1136586 cri.go:89] found id: ""
	I1208 01:57:11.768990 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.768999 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:11.769006 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:11.769075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:11.793322 1136586 cri.go:89] found id: ""
	I1208 01:57:11.793354 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.793364 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:11.793371 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:11.793438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:11.819428 1136586 cri.go:89] found id: ""
	I1208 01:57:11.819473 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.819483 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:11.819490 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:11.819561 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:11.856579 1136586 cri.go:89] found id: ""
	I1208 01:57:11.856620 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.856629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:11.856639 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:11.856650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:11.920066 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:11.920104 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:11.936490 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:11.936579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:12.003301 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:12.003353 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:12.003368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:12.034123 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:12.034162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
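
Each probe cycle in this stretch is the same name-filtered sweep run once per control-plane component; --quiet makes crictl print container IDs only, so an empty result is exactly the found id: "" / 0 containers / No container was found matching triple logged above. A minimal shell rendering of one sweep (component list taken verbatim from the log; the loop itself is illustrative, not minikube's actual Go code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(sudo crictl ps -a --quiet --name="$name")"   # IDs only, all states
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
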
	I1208 01:57:14.566675 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.577850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:14.577926 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:14.614645 1136586 cri.go:89] found id: ""
	I1208 01:57:14.614674 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.614683 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:14.614689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:14.614746 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:14.653668 1136586 cri.go:89] found id: ""
	I1208 01:57:14.653689 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.653698 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:14.653704 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:14.653760 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:14.683123 1136586 cri.go:89] found id: ""
	I1208 01:57:14.683147 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.683155 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:14.683162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:14.683220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:14.712290 1136586 cri.go:89] found id: ""
	I1208 01:57:14.712317 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.712326 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:14.712333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:14.712411 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:14.741728 1136586 cri.go:89] found id: ""
	I1208 01:57:14.741752 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.741761 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:14.741768 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:14.741830 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:14.766640 1136586 cri.go:89] found id: ""
	I1208 01:57:14.766675 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.766684 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:14.766690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:14.766749 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:14.795809 1136586 cri.go:89] found id: ""
	I1208 01:57:14.795833 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.795843 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:14.795850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:14.795908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:14.824523 1136586 cri.go:89] found id: ""
	I1208 01:57:14.824546 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.824555 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:14.824564 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:14.824579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:14.883992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:14.884032 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:14.899927 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:14.899958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:14.971584 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:14.971605 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:14.971618 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:14.997478 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:14.997516 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
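
Every describe-nodes failure here is the same symptom rather than several different ones: the node-local kubeconfig points kubectl at localhost:8443, nothing is listening there, and the dial fails with connection refused before TLS or authentication ever enter the picture. To separate a missing listener from a bad kubeconfig, the check can be replayed by hand inside the node (the kubectl line is verbatim from the log; the ss line is an illustrative extra and assumes ss is available in the node image):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # refused, as logged
    sudo ss -ltnp | grep 8443 \
        || echo "nothing listening on 8443"          # confirms the apiserver socket is absent
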
	I1208 01:57:17.562433 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.573169 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:17.573243 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:17.604838 1136586 cri.go:89] found id: ""
	I1208 01:57:17.604866 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.604879 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:17.604885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:17.604945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:17.651166 1136586 cri.go:89] found id: ""
	I1208 01:57:17.651193 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.651202 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:17.651208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:17.651275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:17.679266 1136586 cri.go:89] found id: ""
	I1208 01:57:17.679302 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.679312 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:17.679318 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:17.679379 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:17.703476 1136586 cri.go:89] found id: ""
	I1208 01:57:17.703504 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.703513 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:17.703519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:17.703579 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:17.732349 1136586 cri.go:89] found id: ""
	I1208 01:57:17.732377 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.732386 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:17.732393 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:17.732461 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:17.761008 1136586 cri.go:89] found id: ""
	I1208 01:57:17.761033 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.761042 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:17.761053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:17.761112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:17.789502 1136586 cri.go:89] found id: ""
	I1208 01:57:17.789527 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.789536 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:17.789543 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:17.789599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:17.814915 1136586 cri.go:89] found id: ""
	I1208 01:57:17.814938 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.814947 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:17.814958 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:17.814971 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:17.901464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:17.901483 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:17.901496 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:17.927699 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:17.927737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:17.956480 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:17.956506 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:18.016061 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:18.016103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
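
With the API server unreachable, the usable evidence comes from the host-side collection steps interleaved above: the kubelet and containerd journals plus a filtered dmesg. The same three commands can be replayed over minikube ssh against the affected profile (<profile> is a placeholder, not a value taken from this log):

    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"
    minikube -p <profile> ssh "sudo journalctl -u containerd -n 400"
    minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
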
	I1208 01:57:20.532462 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.543127 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:20.543203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:20.568124 1136586 cri.go:89] found id: ""
	I1208 01:57:20.568149 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.568158 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:20.568167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:20.568227 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:20.603985 1136586 cri.go:89] found id: ""
	I1208 01:57:20.604021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.604030 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:20.604037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:20.604106 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:20.636556 1136586 cri.go:89] found id: ""
	I1208 01:57:20.636588 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.636597 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:20.636603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:20.636671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:20.672751 1136586 cri.go:89] found id: ""
	I1208 01:57:20.672825 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.672860 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:20.672885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:20.672980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:20.701486 1136586 cri.go:89] found id: ""
	I1208 01:57:20.701557 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.701593 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:20.701617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:20.701708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:20.727838 1136586 cri.go:89] found id: ""
	I1208 01:57:20.727863 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.727873 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:20.727897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:20.727958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:20.757101 1136586 cri.go:89] found id: ""
	I1208 01:57:20.757126 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.757135 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:20.757142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:20.757204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:20.786936 1136586 cri.go:89] found id: ""
	I1208 01:57:20.786961 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.786970 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:20.786981 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:20.786995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:20.801478 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:20.801508 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:20.873983 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:20.874054 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:20.874087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:20.901450 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:20.901529 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:20.934263 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:20.934288 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.489851 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.500424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:23.500500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:23.526190 1136586 cri.go:89] found id: ""
	I1208 01:57:23.526216 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.526225 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:23.526232 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:23.526294 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:23.552764 1136586 cri.go:89] found id: ""
	I1208 01:57:23.552790 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.552799 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:23.552806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:23.552868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:23.577380 1136586 cri.go:89] found id: ""
	I1208 01:57:23.577406 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.577414 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:23.577421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:23.577481 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:23.608802 1136586 cri.go:89] found id: ""
	I1208 01:57:23.608830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.608839 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:23.608846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:23.608910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:23.634994 1136586 cri.go:89] found id: ""
	I1208 01:57:23.635020 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.635029 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:23.635035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:23.635096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:23.663236 1136586 cri.go:89] found id: ""
	I1208 01:57:23.663261 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.663270 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:23.663277 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:23.663350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:23.688872 1136586 cri.go:89] found id: ""
	I1208 01:57:23.688898 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.688907 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:23.688914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:23.688973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:23.714286 1136586 cri.go:89] found id: ""
	I1208 01:57:23.714312 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.714320 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:23.714329 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:23.714345 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:23.742945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:23.742972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.798260 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:23.798300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:23.813312 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:23.813340 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:23.892723 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:23.892748 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:23.892762 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:26.422664 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.433380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:26.433455 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:26.465015 1136586 cri.go:89] found id: ""
	I1208 01:57:26.465039 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.465048 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:26.465055 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:26.465113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:26.493403 1136586 cri.go:89] found id: ""
	I1208 01:57:26.493429 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.493438 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:26.493449 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:26.493537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:26.519773 1136586 cri.go:89] found id: ""
	I1208 01:57:26.519799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.519814 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:26.519821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:26.519883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:26.548992 1136586 cri.go:89] found id: ""
	I1208 01:57:26.549025 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.549037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:26.549047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:26.549127 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:26.574005 1136586 cri.go:89] found id: ""
	I1208 01:57:26.574031 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.574041 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:26.574047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:26.574111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:26.609416 1136586 cri.go:89] found id: ""
	I1208 01:57:26.609443 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.609452 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:26.609459 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:26.609517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:26.640996 1136586 cri.go:89] found id: ""
	I1208 01:57:26.641021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.641031 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:26.641037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:26.641096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:26.667832 1136586 cri.go:89] found id: ""
	I1208 01:57:26.667861 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.667870 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:26.667880 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:26.667911 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:26.727920 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:26.727958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:26.743134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:26.743167 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:26.805654 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:26.805676 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:26.805689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:26.833117 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:26.833153 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.374479 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.385263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:29.385343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:29.411850 1136586 cri.go:89] found id: ""
	I1208 01:57:29.411881 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.411890 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:29.411897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:29.411957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:29.436577 1136586 cri.go:89] found id: ""
	I1208 01:57:29.436650 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.436667 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:29.436674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:29.436741 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:29.461265 1136586 cri.go:89] found id: ""
	I1208 01:57:29.461287 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.461296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:29.461302 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:29.461375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:29.485998 1136586 cri.go:89] found id: ""
	I1208 01:57:29.486024 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.486033 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:29.486039 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:29.486102 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:29.515456 1136586 cri.go:89] found id: ""
	I1208 01:57:29.515482 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.515491 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:29.515498 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:29.515574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:29.540631 1136586 cri.go:89] found id: ""
	I1208 01:57:29.540658 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.540667 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:29.540674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:29.540771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:29.569112 1136586 cri.go:89] found id: ""
	I1208 01:57:29.569156 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.569182 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:29.569194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:29.569276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:29.601158 1136586 cri.go:89] found id: ""
	I1208 01:57:29.601182 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.601192 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:29.601201 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:29.601213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:29.681907 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:29.681933 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:29.681946 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:29.707746 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:29.707781 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.740008 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:29.740036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:29.795859 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:29.795893 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.311192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.322374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:32.322487 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:32.352628 1136586 cri.go:89] found id: ""
	I1208 01:57:32.352653 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.352662 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:32.352668 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:32.352727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:32.379283 1136586 cri.go:89] found id: ""
	I1208 01:57:32.379308 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.379317 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:32.379323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:32.379383 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:32.405884 1136586 cri.go:89] found id: ""
	I1208 01:57:32.405911 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.405919 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:32.405926 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:32.405985 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:32.431914 1136586 cri.go:89] found id: ""
	I1208 01:57:32.431939 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.431948 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:32.431958 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:32.432019 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:32.456763 1136586 cri.go:89] found id: ""
	I1208 01:57:32.456791 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.456799 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:32.456806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:32.456868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:32.482420 1136586 cri.go:89] found id: ""
	I1208 01:57:32.482467 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.482476 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:32.482483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:32.482550 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:32.507167 1136586 cri.go:89] found id: ""
	I1208 01:57:32.507201 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.507210 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:32.507218 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:32.507281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:32.532583 1136586 cri.go:89] found id: ""
	I1208 01:57:32.532612 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.532621 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:32.532630 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:32.532642 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:32.562135 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:32.562163 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:32.619510 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:32.619544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.636767 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:32.636845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:32.721264 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:32.721287 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:32.721300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
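
The probes above repeat at roughly three-second intervals: each cycle starts with a pgrep for a kube-apiserver process and, on a miss, re-enumerates CRI containers and re-gathers logs before trying again. Below is a minimal Go sketch of that wait-and-recheck pattern; the helper name apiServerRunning, the 4-minute deadline, and the fixed 3-second sleep are illustrative assumptions, not minikube's actual bootstrapper code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiServerRunning mirrors the probe at the top of each cycle above:
    // pgrep looks for a kube-apiserver process tied to the minikube profile.
    // pgrep exits non-zero when nothing matches, which Run() surfaces as an
    // error, so a nil error means the process exists.
    func apiServerRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout, not minikube's
    	for time.Now().Before(deadline) {
    		if apiServerRunning() {
    			fmt.Println("kube-apiserver is up")
    			return
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s spacing of the probes in this log
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }
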
	I1208 01:57:35.247026 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.260135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:35.260203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:35.288106 1136586 cri.go:89] found id: ""
	I1208 01:57:35.288130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.288138 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:35.288146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:35.288206 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:35.314646 1136586 cri.go:89] found id: ""
	I1208 01:57:35.314672 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.314682 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:35.314689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:35.314777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:35.342658 1136586 cri.go:89] found id: ""
	I1208 01:57:35.342685 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.342693 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:35.342700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:35.342762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:35.367839 1136586 cri.go:89] found id: ""
	I1208 01:57:35.367862 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.367870 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:35.367877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:35.367937 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:35.392345 1136586 cri.go:89] found id: ""
	I1208 01:57:35.392419 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.392449 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:35.392461 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:35.392525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:35.417214 1136586 cri.go:89] found id: ""
	I1208 01:57:35.417241 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.417250 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:35.417257 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:35.417318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:35.444512 1136586 cri.go:89] found id: ""
	I1208 01:57:35.444538 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.444546 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:35.444556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:35.444614 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:35.470153 1136586 cri.go:89] found id: ""
	I1208 01:57:35.470227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.470250 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:35.470272 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:35.470310 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:35.497905 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:35.497934 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:35.553331 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:35.553369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:35.568215 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:35.568246 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:35.665180 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:35.665205 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:35.665219 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
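
Each cycle queries the runtime once per expected component, and every query here comes back empty. A minimal sketch of that enumeration, assuming only what the log itself shows — that `crictl ps -a --quiet --name=<component>` prints newline-separated container IDs, so an empty result is what gets recorded as `found id: ""` and "0 containers". listCRIContainers is an illustrative name, not minikube's helper.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers reproduces the per-component query from cri.go above.
    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	// one container ID per line; Fields returns an empty slice for empty output
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		ids, err := listCRIContainers(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
    	}
    }

The eight component names are exactly the ones each cycle of this log walks through.
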
	I1208 01:57:38.193386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.204636 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:38.204720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:38.230690 1136586 cri.go:89] found id: ""
	I1208 01:57:38.230717 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.230726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:38.230732 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:38.230791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:38.255363 1136586 cri.go:89] found id: ""
	I1208 01:57:38.255385 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.255394 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:38.255401 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:38.255460 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:38.282875 1136586 cri.go:89] found id: ""
	I1208 01:57:38.282899 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.282907 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:38.282914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:38.282980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:38.308397 1136586 cri.go:89] found id: ""
	I1208 01:57:38.308422 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.308437 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:38.308443 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:38.308505 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:38.334844 1136586 cri.go:89] found id: ""
	I1208 01:57:38.334871 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.334880 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:38.334886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:38.334945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:38.360635 1136586 cri.go:89] found id: ""
	I1208 01:57:38.360659 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.360669 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:38.360676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:38.360737 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:38.385673 1136586 cri.go:89] found id: ""
	I1208 01:57:38.385702 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.385710 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:38.385717 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:38.385776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:38.410525 1136586 cri.go:89] found id: ""
	I1208 01:57:38.410560 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.410569 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:38.410578 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:38.410589 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:38.467839 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:38.467874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:38.482720 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:38.482748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:38.547244 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:38.547268 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:38.547282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.573312 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:38.573350 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
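
Every `describe nodes` attempt above dies the same way: the kubeconfig points kubectl at https://localhost:8443, and the dial is refused outright because no apiserver container is running to listen there. A refused connection (port closed, kernel answering) is distinct from a timeout (host unreachable or packets dropped), and here the refusal is itself diagnostic. A bare TCP probe makes the difference visible; this sketch assumes a Linux target where the refusal surfaces as ECONNREFUSED.

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    	"syscall"
    	"time"
    )

    func main() {
    	// kubectl in the cycles above fails before TLS even starts:
    	// nothing is listening on the apiserver port.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	switch {
    	case err == nil:
    		conn.Close()
    		fmt.Println("something is listening on :8443")
    	case errors.Is(err, syscall.ECONNREFUSED):
    		fmt.Println("connection refused: port closed, apiserver not running")
    	default:
    		fmt.Printf("dial failed: %v\n", err) // e.g. a timeout if packets are dropped
    	}
    }
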
	I1208 01:57:41.116290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:41.132190 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:41.132273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:41.164024 1136586 cri.go:89] found id: ""
	I1208 01:57:41.164049 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.164058 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:41.164064 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:41.164126 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:41.190343 1136586 cri.go:89] found id: ""
	I1208 01:57:41.190380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.190390 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:41.190396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:41.190480 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:41.215567 1136586 cri.go:89] found id: ""
	I1208 01:57:41.215591 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.215600 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:41.215607 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:41.215712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:41.241307 1136586 cri.go:89] found id: ""
	I1208 01:57:41.241380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.241404 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:41.241424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:41.241510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:41.266598 1136586 cri.go:89] found id: ""
	I1208 01:57:41.266666 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.266682 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:41.266689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:41.266748 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:41.292745 1136586 cri.go:89] found id: ""
	I1208 01:57:41.292806 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.292833 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:41.292851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:41.292947 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:41.322477 1136586 cri.go:89] found id: ""
	I1208 01:57:41.322503 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.322528 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:41.322534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:41.322598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:41.348001 1136586 cri.go:89] found id: ""
	I1208 01:57:41.348028 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.348037 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:41.348047 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:41.348059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:41.413651 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:41.413677 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:41.413690 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:41.443591 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:41.443637 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:41.475807 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:41.475839 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:41.531946 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:41.531985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
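
Note that the order of the "Gathering logs for ..." lines shuffles from cycle to cycle (container status first in one pass, kubelet or describe nodes first in another). That behavior is consistent with ranging over a Go map, whose iteration order is deliberately randomized. Below is a sketch of such a fan-out using the five commands exactly as they appear in this log; the map layout itself is an assumption about logs.go, not confirmed from its source.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The five sources gathered on every cycle, keyed the way logs.go labels them.
    	gatherers := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	// Map iteration order is randomized in Go, which would account for
    	// the shuffled gathering order across the cycles above.
    	for name, cmd := range gatherers {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("  %s failed: %v\n", name, err)
    		}
    		fmt.Printf("%s", out)
    	}
    }
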
	I1208 01:57:44.047381 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.058560 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:44.058632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:44.086946 1136586 cri.go:89] found id: ""
	I1208 01:57:44.086974 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.086983 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:44.086990 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:44.087055 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:44.119808 1136586 cri.go:89] found id: ""
	I1208 01:57:44.119837 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.119846 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:44.119853 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:44.119914 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:44.151166 1136586 cri.go:89] found id: ""
	I1208 01:57:44.151189 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.151197 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:44.151204 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:44.151266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:44.179208 1136586 cri.go:89] found id: ""
	I1208 01:57:44.179232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.179240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:44.179247 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:44.179307 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:44.204931 1136586 cri.go:89] found id: ""
	I1208 01:57:44.204957 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.204967 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:44.204973 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:44.205086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:44.233222 1136586 cri.go:89] found id: ""
	I1208 01:57:44.233263 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.233289 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:44.233303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:44.233381 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:44.258112 1136586 cri.go:89] found id: ""
	I1208 01:57:44.258180 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.258204 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:44.258225 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:44.258301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:44.282317 1136586 cri.go:89] found id: ""
	I1208 01:57:44.282339 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.282348 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:44.282358 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:44.282369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:44.337431 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:44.337465 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:44.352560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:44.352633 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:44.416710 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:44.416734 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:44.416745 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:44.443231 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:44.443264 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
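
The "container status" command above is its own small fallback chain: `which crictl` substitutes the binary's full path when crictl is installed, `echo crictl` keeps the bare name otherwise (leaving resolution to sudo's PATH), and `|| sudo docker ps -a` runs only if the whole crictl invocation fails. A Go equivalent of that chain, with exec.LookPath standing in for `which`; the function name listAllContainers is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listAllContainers follows the same fallback chain as the shell one-liner:
    // prefer crictl resolved from PATH, fall back to the bare name, and only
    // run docker if the crictl attempt fails.
    func listAllContainers() ([]byte, error) {
    	bin := "crictl"
    	if path, err := exec.LookPath("crictl"); err == nil {
    		bin = path // same effect as a successful `which crictl`
    	}
    	if out, err := exec.Command("sudo", bin, "ps", "-a").CombinedOutput(); err == nil {
    		return out, nil
    	}
    	// the `|| sudo docker ps -a` branch: reached only when crictl failed
    	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
    	out, err := listAllContainers()
    	if err != nil {
    		fmt.Println("neither crictl nor docker could list containers:", err)
    		return
    	}
    	fmt.Print(string(out))
    }
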
	I1208 01:57:46.971715 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.982590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:46.982716 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:47.013622 1136586 cri.go:89] found id: ""
	I1208 01:57:47.013655 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.013665 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:47.013689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:47.013773 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:47.039262 1136586 cri.go:89] found id: ""
	I1208 01:57:47.039288 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.039298 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:47.039305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:47.039369 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:47.064571 1136586 cri.go:89] found id: ""
	I1208 01:57:47.064597 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.064606 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:47.064612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:47.064671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:47.103360 1136586 cri.go:89] found id: ""
	I1208 01:57:47.103428 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.103452 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:47.103471 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:47.103558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:47.137446 1136586 cri.go:89] found id: ""
	I1208 01:57:47.137514 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.137537 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:47.137556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:47.137643 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:47.167484 1136586 cri.go:89] found id: ""
	I1208 01:57:47.167507 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.167515 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:47.167522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:47.167581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:47.198040 1136586 cri.go:89] found id: ""
	I1208 01:57:47.198072 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.198082 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:47.198088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:47.198155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:47.222585 1136586 cri.go:89] found id: ""
	I1208 01:57:47.222609 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.222618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:47.222635 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:47.222648 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:47.253438 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:47.253468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:47.312655 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:47.312692 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:47.328066 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:47.328146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:47.396328 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:47.396351 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:47.396365 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:49.922587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.933241 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.933357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.957944 1136586 cri.go:89] found id: ""
	I1208 01:57:49.957967 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.957976 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.957983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.958043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.983531 1136586 cri.go:89] found id: ""
	I1208 01:57:49.983556 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.983565 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.983573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.983634 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:50.014921 1136586 cri.go:89] found id: ""
	I1208 01:57:50.014948 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.014958 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:50.014965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:50.015054 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:50.051300 1136586 cri.go:89] found id: ""
	I1208 01:57:50.051356 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.051365 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:50.051373 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:50.051439 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:50.078205 1136586 cri.go:89] found id: ""
	I1208 01:57:50.078232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.078242 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:50.078248 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:50.078313 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:50.116415 1136586 cri.go:89] found id: ""
	I1208 01:57:50.116472 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.116482 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:50.116489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:50.116549 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:50.152924 1136586 cri.go:89] found id: ""
	I1208 01:57:50.152953 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.152962 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:50.152971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:50.153034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:50.183266 1136586 cri.go:89] found id: ""
	I1208 01:57:50.183303 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.183313 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:50.183323 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:50.183339 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:50.219490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:50.219518 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:50.278125 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:50.278160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:50.293360 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:50.293392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:50.361099 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:50.361124 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:50.361137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:52.887762 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:52.898605 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:52.898684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:52.924862 1136586 cri.go:89] found id: ""
	I1208 01:57:52.924888 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.924898 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:52.924904 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:52.924967 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.953738 1136586 cri.go:89] found id: ""
	I1208 01:57:52.953766 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.953775 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.953781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.953841 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.979112 1136586 cri.go:89] found id: ""
	I1208 01:57:52.979135 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.979143 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.979156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.979220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:53.010105 1136586 cri.go:89] found id: ""
	I1208 01:57:53.010136 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.010146 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:53.010153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:53.010224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:53.040709 1136586 cri.go:89] found id: ""
	I1208 01:57:53.040737 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.040746 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:53.040759 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:53.040820 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:53.066591 1136586 cri.go:89] found id: ""
	I1208 01:57:53.066615 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.066624 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:53.066631 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:53.066690 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:53.103691 1136586 cri.go:89] found id: ""
	I1208 01:57:53.103721 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.103730 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:53.103737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:53.103796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:53.135825 1136586 cri.go:89] found id: ""
	I1208 01:57:53.135860 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.135869 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:53.135879 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:53.135892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:53.154871 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:53.154897 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:53.223770 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.223803 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:53.223818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:53.248879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:53.248912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:53.278989 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:53.279015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.836344 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:55.851014 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:55.851088 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:55.880945 1136586 cri.go:89] found id: ""
	I1208 01:57:55.880968 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.880977 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:55.880983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:55.881047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.918324 1136586 cri.go:89] found id: ""
	I1208 01:57:55.918348 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.918357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.918363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.918420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.943772 1136586 cri.go:89] found id: ""
	I1208 01:57:55.943799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.943808 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.943814 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.943872 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.968672 1136586 cri.go:89] found id: ""
	I1208 01:57:55.968695 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.968705 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.968711 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.968772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.993546 1136586 cri.go:89] found id: ""
	I1208 01:57:55.993573 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.993582 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.993588 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.993648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:56.026891 1136586 cri.go:89] found id: ""
	I1208 01:57:56.026916 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.026924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:56.026931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:56.026998 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:56.053302 1136586 cri.go:89] found id: ""
	I1208 01:57:56.053334 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.053344 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:56.053356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:56.053468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:56.079706 1136586 cri.go:89] found id: ""
	I1208 01:57:56.079733 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.079741 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:56.079750 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:56.079761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:56.142320 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:56.142357 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:56.157995 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:56.158067 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:56.221039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:56.221063 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:56.221077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:56.247019 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:56.247058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.775233 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:58.785596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:58.785682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:58.809955 1136586 cri.go:89] found id: ""
	I1208 01:57:58.809986 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.809996 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:58.810002 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:58.810061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:58.835423 1136586 cri.go:89] found id: ""
	I1208 01:57:58.835447 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.835456 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:58.835462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:58.835524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.867905 1136586 cri.go:89] found id: ""
	I1208 01:57:58.867928 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.867937 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.867943 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.868003 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.896767 1136586 cri.go:89] found id: ""
	I1208 01:57:58.896794 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.896803 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.896810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.896868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.926611 1136586 cri.go:89] found id: ""
	I1208 01:57:58.926633 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.926642 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.926648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.926707 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.954977 1136586 cri.go:89] found id: ""
	I1208 01:57:58.955001 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.955010 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.955016 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.955075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.984186 1136586 cri.go:89] found id: ""
	I1208 01:57:58.984209 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.984218 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.984224 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.984286 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:59.011291 1136586 cri.go:89] found id: ""
	I1208 01:57:59.011314 1136586 logs.go:282] 0 containers: []
	W1208 01:57:59.011323 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:59.011333 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:59.011346 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:59.067486 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:59.067520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:59.082307 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:59.082334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:59.162802 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:59.162826 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:59.162838 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:59.187405 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:59.187437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:01.720540 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:01.731197 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:01.731266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:01.756392 1136586 cri.go:89] found id: ""
	I1208 01:58:01.756414 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.756431 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:01.756438 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:01.756504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:01.782980 1136586 cri.go:89] found id: ""
	I1208 01:58:01.783050 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.783074 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:01.783099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:01.783180 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:01.808911 1136586 cri.go:89] found id: ""
	I1208 01:58:01.808947 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.808957 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:01.808964 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:01.809032 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.833417 1136586 cri.go:89] found id: ""
	I1208 01:58:01.833490 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.833514 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.833534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.833644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.863178 1136586 cri.go:89] found id: ""
	I1208 01:58:01.863255 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.863277 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.863296 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.863391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.893466 1136586 cri.go:89] found id: ""
	I1208 01:58:01.893540 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.893562 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.893582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.893669 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.927969 1136586 cri.go:89] found id: ""
	I1208 01:58:01.928046 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.928060 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.928067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.928137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.954102 1136586 cri.go:89] found id: ""
	I1208 01:58:01.954130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.954141 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.954150 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.954162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:02.011065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:02.011103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:02.028187 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:02.028220 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:02.092492 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:02.092518 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:02.092532 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:02.123344 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:02.123377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:04.657423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:04.669705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:04.669794 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:04.696818 1136586 cri.go:89] found id: ""
	I1208 01:58:04.696848 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.696857 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:04.696864 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:04.696973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:04.723929 1136586 cri.go:89] found id: ""
	I1208 01:58:04.723951 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.723960 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:04.723967 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:04.724028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:04.749688 1136586 cri.go:89] found id: ""
	I1208 01:58:04.749712 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.749721 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:04.749727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:04.749790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:04.780181 1136586 cri.go:89] found id: ""
	I1208 01:58:04.780212 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.780223 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:04.780230 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:04.780310 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.805904 1136586 cri.go:89] found id: ""
	I1208 01:58:04.805930 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.805941 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.805947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.806004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.830657 1136586 cri.go:89] found id: ""
	I1208 01:58:04.830682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.830692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.830699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.830765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.870065 1136586 cri.go:89] found id: ""
	I1208 01:58:04.870130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.870152 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.870170 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.870263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.898118 1136586 cri.go:89] found id: ""
	I1208 01:58:04.898185 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.898207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.898228 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.898266 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:04.931407 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.931433 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.987787 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.987825 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:05.003245 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:05.003331 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:05.079158 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:05.079184 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:05.079196 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:07.607089 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:07.617881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:07.617954 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:07.643290 1136586 cri.go:89] found id: ""
	I1208 01:58:07.643356 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.643378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:07.643396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:07.643483 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:07.668986 1136586 cri.go:89] found id: ""
	I1208 01:58:07.669054 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.669078 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:07.669099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:07.669190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:07.703052 1136586 cri.go:89] found id: ""
	I1208 01:58:07.703077 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.703086 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:07.703093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:07.703153 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:07.730752 1136586 cri.go:89] found id: ""
	I1208 01:58:07.730780 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.730791 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:07.730801 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:07.730864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.757395 1136586 cri.go:89] found id: ""
	I1208 01:58:07.757420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.757429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.757442 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.757504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.781922 1136586 cri.go:89] found id: ""
	I1208 01:58:07.781946 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.781955 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.781961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.782020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.806746 1136586 cri.go:89] found id: ""
	I1208 01:58:07.806769 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.806778 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.806785 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.806855 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.835050 1136586 cri.go:89] found id: ""
	I1208 01:58:07.835079 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.835088 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.835097 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.835110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.898132 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.898165 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.918936 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.918964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.984291 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.975795    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.976627    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978225    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978797    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.980321    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.975795    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.976627    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978225    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978797    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.980321    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.984315 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:07.984328 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:08.010075 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:08.010113 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:10.540471 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:10.551266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:10.551338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:10.577175 1136586 cri.go:89] found id: ""
	I1208 01:58:10.577202 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.577212 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:10.577219 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:10.577281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:10.602532 1136586 cri.go:89] found id: ""
	I1208 01:58:10.602567 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.602577 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:10.602584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:10.602646 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:10.628758 1136586 cri.go:89] found id: ""
	I1208 01:58:10.628782 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.628790 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:10.628796 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:10.628860 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:10.658744 1136586 cri.go:89] found id: ""
	I1208 01:58:10.658767 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.658776 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:10.658783 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:10.658848 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:10.687442 1136586 cri.go:89] found id: ""
	I1208 01:58:10.687466 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.687475 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:10.687483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:10.687547 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.713454 1136586 cri.go:89] found id: ""
	I1208 01:58:10.713527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.713551 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.713573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.713662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.738872 1136586 cri.go:89] found id: ""
	I1208 01:58:10.738896 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.738905 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.738912 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.739073 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.764935 1136586 cri.go:89] found id: ""
	I1208 01:58:10.764962 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.764972 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.764981 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.764995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.822530 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.822568 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.837607 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.837635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.917003 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.907188   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.907604   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.908761   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910108   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910871   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.907188   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.907604   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.908761   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910108   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910871   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:10.917024 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:10.917036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:10.943077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.943113 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.473561 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:13.484592 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:13.484660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:13.511440 1136586 cri.go:89] found id: ""
	I1208 01:58:13.511463 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.511472 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:13.511478 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:13.511541 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:13.535634 1136586 cri.go:89] found id: ""
	I1208 01:58:13.535659 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.535668 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:13.535675 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:13.535734 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:13.560688 1136586 cri.go:89] found id: ""
	I1208 01:58:13.560712 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.560720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:13.560727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:13.560791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:13.586137 1136586 cri.go:89] found id: ""
	I1208 01:58:13.586217 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.586240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:13.586261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:13.586354 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:13.612353 1136586 cri.go:89] found id: ""
	I1208 01:58:13.612378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.612388 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:13.612394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:13.612466 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:13.642171 1136586 cri.go:89] found id: ""
	I1208 01:58:13.642198 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.642208 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:13.642215 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:13.642276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:13.668409 1136586 cri.go:89] found id: ""
	I1208 01:58:13.668440 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.668448 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:13.668455 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:13.668537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.701198 1136586 cri.go:89] found id: ""
	I1208 01:58:13.701223 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.701232 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.701240 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.701252 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.758303 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.758338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.773305 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.773343 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.842494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.831867   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.832590   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834278   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834758   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.836399   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.831867   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.832590   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834278   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834758   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.836399   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:13.842521 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:13.842537 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:13.871092 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.871129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.410612 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:16.421252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:16.421335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:16.448846 1136586 cri.go:89] found id: ""
	I1208 01:58:16.448872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.448880 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:16.448887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:16.448954 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:16.478943 1136586 cri.go:89] found id: ""
	I1208 01:58:16.478968 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.478977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:16.478984 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:16.479044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:16.504203 1136586 cri.go:89] found id: ""
	I1208 01:58:16.504230 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.504239 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:16.504245 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:16.504305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:16.531210 1136586 cri.go:89] found id: ""
	I1208 01:58:16.531238 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.531247 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:16.531254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:16.531343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:16.561091 1136586 cri.go:89] found id: ""
	I1208 01:58:16.561122 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.561130 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:16.561137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:16.561199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:16.586402 1136586 cri.go:89] found id: ""
	I1208 01:58:16.586427 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.586435 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:16.586462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:16.586524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:16.611837 1136586 cri.go:89] found id: ""
	I1208 01:58:16.611863 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.611873 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:16.611879 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:16.611961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:16.637357 1136586 cri.go:89] found id: ""
	I1208 01:58:16.637399 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.637408 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:16.637434 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.637468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.692659 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.692739 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:16.709626 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:16.709655 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:16.785738 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.776953   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.777418   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779067   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779838   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.781586   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:16.785761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:16.785774 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:16.811061 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.811096 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
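The cycle above is minikube's control-plane wait loop: every few seconds it re-runs "sudo pgrep -xnf kube-apiserver.*minikube.*", asks crictl for each expected control-plane container, and gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. Every probe returns empty because no control-plane container ever started. The same checks can be re-run by hand inside the node; this is a minimal sketch using only commands that appear verbatim in the log (the profile name is a placeholder, not taken from this log):

	minikube ssh -p <profile>
	# each of these returned no IDs in the log above
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=coredns
	# the journals minikube collects on each pass
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400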
	I1208 01:58:19.346587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:19.359091 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:19.359159 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:19.395510 1136586 cri.go:89] found id: ""
	I1208 01:58:19.395536 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.395545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:19.395551 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:19.395609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:19.423019 1136586 cri.go:89] found id: ""
	I1208 01:58:19.423044 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.423053 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:19.423059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:19.423120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:19.449460 1136586 cri.go:89] found id: ""
	I1208 01:58:19.449487 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.449496 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:19.449503 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:19.449574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:19.476285 1136586 cri.go:89] found id: ""
	I1208 01:58:19.476311 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.476320 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:19.476327 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:19.476387 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:19.504576 1136586 cri.go:89] found id: ""
	I1208 01:58:19.504603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.504613 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:19.504620 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:19.504682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:19.530968 1136586 cri.go:89] found id: ""
	I1208 01:58:19.530994 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.531015 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:19.531023 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:19.531092 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:19.555468 1136586 cri.go:89] found id: ""
	I1208 01:58:19.555492 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.555501 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:19.555508 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:19.555571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:19.580667 1136586 cri.go:89] found id: ""
	I1208 01:58:19.580703 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.580716 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:19.580726 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:19.580737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.638717 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.638754 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.653903 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.653935 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.721039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.712404   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.713247   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.714932   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.715513   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.717086   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:19.721058 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:19.721071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:19.747016 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.747054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:22.280191 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:22.290698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:22.290771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:22.319983 1136586 cri.go:89] found id: ""
	I1208 01:58:22.320007 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.320016 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:22.320022 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:22.320084 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:22.349912 1136586 cri.go:89] found id: ""
	I1208 01:58:22.349939 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.349949 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:22.349955 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:22.350016 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:22.381227 1136586 cri.go:89] found id: ""
	I1208 01:58:22.381253 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.381262 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:22.381269 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:22.381327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:22.412055 1136586 cri.go:89] found id: ""
	I1208 01:58:22.412130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.412143 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:22.412150 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:22.412219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:22.437094 1136586 cri.go:89] found id: ""
	I1208 01:58:22.437169 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.437193 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:22.437214 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:22.437338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:22.466779 1136586 cri.go:89] found id: ""
	I1208 01:58:22.466809 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.466817 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:22.466824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:22.466888 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:22.492472 1136586 cri.go:89] found id: ""
	I1208 01:58:22.492555 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.492580 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:22.492599 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:22.492683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:22.517817 1136586 cri.go:89] found id: ""
	I1208 01:58:22.517865 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.517875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:22.517884 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:22.517896 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:22.533468 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:22.533495 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.600107 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.591549   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.592250   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594062   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594494   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.596227   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:22.600132 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:22.600145 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:22.625768 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.625805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:22.654249 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:22.654334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.216756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:25.228093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:25.228171 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:25.254793 1136586 cri.go:89] found id: ""
	I1208 01:58:25.254820 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.254840 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:25.254848 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:25.254911 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:25.280729 1136586 cri.go:89] found id: ""
	I1208 01:58:25.280756 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.280765 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:25.280772 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:25.280856 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:25.306714 1136586 cri.go:89] found id: ""
	I1208 01:58:25.306786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.306802 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:25.306809 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:25.306883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:25.333920 1136586 cri.go:89] found id: ""
	I1208 01:58:25.333955 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.333964 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:25.333971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:25.334044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:25.361369 1136586 cri.go:89] found id: ""
	I1208 01:58:25.361396 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.361405 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:25.361412 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:25.361486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:25.392931 1136586 cri.go:89] found id: ""
	I1208 01:58:25.392958 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.392967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:25.392974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:25.393046 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:25.423143 1136586 cri.go:89] found id: ""
	I1208 01:58:25.423168 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.423177 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:25.423183 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:25.423245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:25.452795 1136586 cri.go:89] found id: ""
	I1208 01:58:25.452872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.452888 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:25.452899 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:25.452913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:25.479544 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.479585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:25.510747 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:25.510777 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.566401 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:25.566437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:25.581786 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:25.581816 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:25.653146 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:25.644228   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.645011   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.646682   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.647230   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.648941   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
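Each describe-nodes attempt fails identically: kubectl cannot open a TCP connection to [::1]:8443 at all, so nothing is listening on the apiserver port — a startup failure rather than a TLS or authorization problem. Two quick probes confirm this from inside the node (illustrative only; ss and curl are assumed to be available in the node image):

	# expect no listener bound to the apiserver port
	sudo ss -tlnp | grep 8443
	# expect "connection refused", matching the kubectl errors above
	curl -k https://localhost:8443/healthz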
	I1208 01:58:28.153984 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:28.164723 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:28.164793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:28.188760 1136586 cri.go:89] found id: ""
	I1208 01:58:28.188786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.188796 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:28.188803 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:28.188865 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:28.213011 1136586 cri.go:89] found id: ""
	I1208 01:58:28.213037 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.213046 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:28.213053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:28.213114 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:28.237473 1136586 cri.go:89] found id: ""
	I1208 01:58:28.237547 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.237559 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:28.237566 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:28.237692 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:28.264353 1136586 cri.go:89] found id: ""
	I1208 01:58:28.264378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.264387 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:28.264394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:28.264478 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:28.289216 1136586 cri.go:89] found id: ""
	I1208 01:58:28.289250 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.289259 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:28.289265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:28.289332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:28.314397 1136586 cri.go:89] found id: ""
	I1208 01:58:28.314431 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.314440 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:28.314480 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:28.314553 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:28.339256 1136586 cri.go:89] found id: ""
	I1208 01:58:28.339290 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.339299 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:28.339305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:28.339372 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:28.376790 1136586 cri.go:89] found id: ""
	I1208 01:58:28.376824 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.376833 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:28.376842 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:28.376854 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:28.412562 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:28.412597 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:28.468784 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:28.468818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:28.483513 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:28.483539 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:28.548999 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:28.540733   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.541130   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.542744   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.543481   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.545172   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:28.549069 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:28.549088 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.074358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:31.085483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:31.085557 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:31.113378 1136586 cri.go:89] found id: ""
	I1208 01:58:31.113404 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.113413 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:31.113419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:31.113486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:31.151500 1136586 cri.go:89] found id: ""
	I1208 01:58:31.151527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.151537 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:31.151544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:31.151606 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:31.198664 1136586 cri.go:89] found id: ""
	I1208 01:58:31.198692 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.198701 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:31.198708 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:31.198770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:31.225073 1136586 cri.go:89] found id: ""
	I1208 01:58:31.225100 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.225109 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:31.225115 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:31.225178 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:31.253221 1136586 cri.go:89] found id: ""
	I1208 01:58:31.253248 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.253256 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:31.253263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:31.253328 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:31.278685 1136586 cri.go:89] found id: ""
	I1208 01:58:31.278715 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.278724 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:31.278731 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:31.278793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:31.308014 1136586 cri.go:89] found id: ""
	I1208 01:58:31.308040 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.308050 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:31.308057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:31.308118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:31.333618 1136586 cri.go:89] found id: ""
	I1208 01:58:31.333646 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.333655 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:31.333666 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:31.333677 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.360688 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:31.360767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:31.400673 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:31.400748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:31.458405 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:31.458467 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:31.473371 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:31.473403 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:31.535352 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:31.527438   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.527848   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529393   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529711   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.531184   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:34.035643 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:34.047071 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:34.047236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:34.072671 1136586 cri.go:89] found id: ""
	I1208 01:58:34.072696 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.072705 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:34.072712 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:34.072776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:34.102807 1136586 cri.go:89] found id: ""
	I1208 01:58:34.102835 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.102844 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:34.102851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:34.102910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:34.129970 1136586 cri.go:89] found id: ""
	I1208 01:58:34.129998 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.130007 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:34.130017 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:34.130077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:34.156982 1136586 cri.go:89] found id: ""
	I1208 01:58:34.157009 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.157019 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:34.157026 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:34.157086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:34.181976 1136586 cri.go:89] found id: ""
	I1208 01:58:34.182003 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.182013 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:34.182020 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:34.182081 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:34.206537 1136586 cri.go:89] found id: ""
	I1208 01:58:34.206615 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.206630 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:34.206638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:34.206699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:34.236167 1136586 cri.go:89] found id: ""
	I1208 01:58:34.236192 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.236201 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:34.236210 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:34.236270 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:34.262308 1136586 cri.go:89] found id: ""
	I1208 01:58:34.262332 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.262341 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:34.262351 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:34.262363 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:34.317558 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:34.317593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:34.332448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:34.332475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:34.412027 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:34.403876   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.404660   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406277   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406619   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.408039   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:34.412050 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:34.412062 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:34.438062 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:34.438097 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.967795 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.978660 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.978730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:37.012757 1136586 cri.go:89] found id: ""
	I1208 01:58:37.012787 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.012797 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:37.012804 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:37.012878 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:37.041663 1136586 cri.go:89] found id: ""
	I1208 01:58:37.041685 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.041693 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:37.041700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:37.041758 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:37.066610 1136586 cri.go:89] found id: ""
	I1208 01:58:37.066694 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.066716 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:37.066734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:37.066844 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:37.094085 1136586 cri.go:89] found id: ""
	I1208 01:58:37.094162 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.094187 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:37.094209 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:37.094319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:37.132780 1136586 cri.go:89] found id: ""
	I1208 01:58:37.132864 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.132886 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:37.132905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:37.133017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:37.169263 1136586 cri.go:89] found id: ""
	I1208 01:58:37.169340 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.169365 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:37.169386 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:37.169498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:37.194196 1136586 cri.go:89] found id: ""
	I1208 01:58:37.194275 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.194300 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:37.194319 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:37.194404 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:37.219299 1136586 cri.go:89] found id: ""
	I1208 01:58:37.219378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.219415 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:37.219442 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:37.219469 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:37.274745 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:37.274782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:37.289751 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:37.289779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:37.363255 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:37.352560   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.353342   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355038   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355657   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.357229   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:37.363297 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:37.363316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:37.401496 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:37.401554 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
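The timestamps show the loop firing on a roughly three-second cadence (01:58:16, :19, :22, :25, :28, :31, :34, :37, ...) with identical empty results on every pass; it keeps polling until minikube's wait timeout expires and the test is marked failed. In shell terms the probe it repeats amounts to the sketch below (the 3-second interval is inferred from these timestamps, not read from minikube's source):

	# poll until an apiserver process appears, mirroring the log's pgrep probe
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 3
	done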
	I1208 01:58:39.942202 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:39.953239 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.953312 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.978920 1136586 cri.go:89] found id: ""
	I1208 01:58:39.978943 1136586 logs.go:282] 0 containers: []
	W1208 01:58:39.978952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.978959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.979017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:40.025284 1136586 cri.go:89] found id: ""
	I1208 01:58:40.025316 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.025343 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:40.025352 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:40.025427 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:40.067843 1136586 cri.go:89] found id: ""
	I1208 01:58:40.067869 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.067879 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:40.067886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:40.067952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:40.102669 1136586 cri.go:89] found id: ""
	I1208 01:58:40.102759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.102785 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:40.102806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:40.102923 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:40.150768 1136586 cri.go:89] found id: ""
	I1208 01:58:40.150799 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.150809 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:40.150815 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:40.150881 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:40.179334 1136586 cri.go:89] found id: ""
	I1208 01:58:40.179362 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.179373 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:40.179382 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:40.179453 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:40.208035 1136586 cri.go:89] found id: ""
	I1208 01:58:40.208063 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.208072 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:40.208079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:40.208144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:40.238244 1136586 cri.go:89] found id: ""
	I1208 01:58:40.238286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.238296 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:40.238306 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:40.238320 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:40.264240 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:40.264279 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:40.295875 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:40.295900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:40.355993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:40.356087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:40.374494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:40.374575 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:40.448504 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
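
The cycle above (process probe, per-component container listing, log gathering, failed `describe nodes`) repeats below roughly every three seconds. A minimal Go sketch of that cadence, assuming only that `sudo` and `pgrep` are available on the node; the function names are illustrative, not minikube's own:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning mirrors the probe logged above: pgrep exits 0 only when
// a process matching the pattern exists, so a nil error means "running".
func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		if apiServerRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3 s spacing of the timestamps above
	}
	fmt.Println("gave up: kube-apiserver never appeared")
}

minikube's real wait loop runs these probes over SSH (ssh_runner.go) with a much longer deadline; the sketch only reproduces the observable probe-and-sleep pattern.
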
	I1208 01:58:42.948778 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.959677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.959745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.984449 1136586 cri.go:89] found id: ""
	I1208 01:58:42.984474 1136586 logs.go:282] 0 containers: []
	W1208 01:58:42.984483 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.984489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.984555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:43.015138 1136586 cri.go:89] found id: ""
	I1208 01:58:43.015163 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.015172 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:43.015178 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:43.015242 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:43.040581 1136586 cri.go:89] found id: ""
	I1208 01:58:43.040608 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.040617 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:43.040623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:43.040685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:43.066316 1136586 cri.go:89] found id: ""
	I1208 01:58:43.066345 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.066367 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:43.066374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:43.066484 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:43.095034 1136586 cri.go:89] found id: ""
	I1208 01:58:43.095062 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.095071 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:43.095077 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:43.095137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:43.129297 1136586 cri.go:89] found id: ""
	I1208 01:58:43.129323 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.129333 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:43.129340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:43.129413 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:43.160843 1136586 cri.go:89] found id: ""
	I1208 01:58:43.160912 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.160929 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:43.160937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:43.161012 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:43.189017 1136586 cri.go:89] found id: ""
	I1208 01:58:43.189043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.189051 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:43.189060 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:43.189071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:43.245153 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:43.245189 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:43.260337 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:43.260380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:43.329966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:43.329985 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:43.329998 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:43.357975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:43.358058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
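
The "container status" step is the only gather step with a fallback chain: it prefers whatever `which crictl` finds and degrades to `docker ps -a`. Because the command relies on backtick substitution and `||` chaining, it must run through a shell. A hedged sketch of invoking it from Go, using only the command text already shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// bash is required: plain exec would not expand `which crictl || echo crictl`
	// or evaluate the "||" fallback to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
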
	I1208 01:58:45.892416 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.902821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.902893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.929257 1136586 cri.go:89] found id: ""
	I1208 01:58:45.929283 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.929292 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.929299 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.929357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.954817 1136586 cri.go:89] found id: ""
	I1208 01:58:45.954851 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.954861 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.954867 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.954928 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.980153 1136586 cri.go:89] found id: ""
	I1208 01:58:45.980183 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.980196 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.980202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.980263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:46.009369 1136586 cri.go:89] found id: ""
	I1208 01:58:46.009398 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.009408 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:46.009415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:46.009555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:46.035686 1136586 cri.go:89] found id: ""
	I1208 01:58:46.035713 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.035736 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:46.035743 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:46.035815 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:46.065295 1136586 cri.go:89] found id: ""
	I1208 01:58:46.065327 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.065337 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:46.065344 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:46.065414 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:46.104678 1136586 cri.go:89] found id: ""
	I1208 01:58:46.104746 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.104769 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:46.104790 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:46.104877 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:46.134606 1136586 cri.go:89] found id: ""
	I1208 01:58:46.134682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.134705 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:46.134727 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:46.134766 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:46.198135 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:46.198171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:46.213155 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:46.213180 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:46.287421 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:46.287443 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:46.287456 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:46.313370 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:46.313405 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
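
Each cycle sweeps the same eight component names with `crictl ps -a --quiet --name=<component>`; `--quiet` prints only container IDs, one per line, so empty output is exactly what the log records as `found id: ""` and "0 containers". A self-contained sketch of that sweep (the component list and flags are copied from the log; everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out)) // one container ID per line when non-empty
		fmt.Printf("%-24s %d container(s)\n", name, len(ids))
	}
}
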
	I1208 01:58:48.849489 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.861044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.861117 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.886203 1136586 cri.go:89] found id: ""
	I1208 01:58:48.886227 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.886237 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.886243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.886305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.911152 1136586 cri.go:89] found id: ""
	I1208 01:58:48.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.911187 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.911193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.911275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.935595 1136586 cri.go:89] found id: ""
	I1208 01:58:48.935620 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.935629 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.935635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.935750 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.959533 1136586 cri.go:89] found id: ""
	I1208 01:58:48.959558 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.959566 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.959573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.959631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.985031 1136586 cri.go:89] found id: ""
	I1208 01:58:48.985057 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.985066 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.985073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.985176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:49.014577 1136586 cri.go:89] found id: ""
	I1208 01:58:49.014603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.014612 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:49.014619 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:49.014679 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:49.038952 1136586 cri.go:89] found id: ""
	I1208 01:58:49.038978 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.038987 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:49.038993 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:49.039051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:49.063733 1136586 cri.go:89] found id: ""
	I1208 01:58:49.063759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.063768 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:49.063777 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:49.063788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:49.097818 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:49.097852 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:49.161476 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:49.161513 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:49.178959 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:49.178995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:49.243404 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:49.243465 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:49.243502 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
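
Every `describe nodes` attempt fails identically with `dial tcp [::1]:8443: connect: connection refused`: the TCP connection itself is rejected because nothing is listening, which is consistent with the empty kube-apiserver container listings in the same cycles. The diagnosis can be confirmed without kubectl by a bare dial; a minimal sketch, assuming the port 8443 seen in the errors:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // expected while the cycles above repeat
		return
	}
	conn.Close()
	fmt.Println("a listener is present on :8443")
}
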
	I1208 01:58:51.768803 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.780779 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.780851 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.808733 1136586 cri.go:89] found id: ""
	I1208 01:58:51.808759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.808768 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.808775 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.808846 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.835560 1136586 cri.go:89] found id: ""
	I1208 01:58:51.835587 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.835599 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.835606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.835670 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.860461 1136586 cri.go:89] found id: ""
	I1208 01:58:51.860485 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.860494 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.860501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.860562 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.885253 1136586 cri.go:89] found id: ""
	I1208 01:58:51.885286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.885294 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.885303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.885373 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.909393 1136586 cri.go:89] found id: ""
	I1208 01:58:51.909420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.909429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.909436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.909498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.934211 1136586 cri.go:89] found id: ""
	I1208 01:58:51.934245 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.934254 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.934261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.934331 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.958861 1136586 cri.go:89] found id: ""
	I1208 01:58:51.958887 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.958896 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.958903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.958961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.984069 1136586 cri.go:89] found id: ""
	I1208 01:58:51.984095 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.984106 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.984115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.984146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:51.999081 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.999109 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:52.068304 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:52.068327 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:52.068341 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:52.094374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:52.094481 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:52.127916 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:52.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.695208 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.706109 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.706218 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.731787 1136586 cri.go:89] found id: ""
	I1208 01:58:54.731814 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.731823 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.731835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.731895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.760606 1136586 cri.go:89] found id: ""
	I1208 01:58:54.760631 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.760639 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.760646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.760706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.786598 1136586 cri.go:89] found id: ""
	I1208 01:58:54.786626 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.786635 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.786641 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.786699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.816536 1136586 cri.go:89] found id: ""
	I1208 01:58:54.816562 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.816572 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.816579 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.816641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.845022 1136586 cri.go:89] found id: ""
	I1208 01:58:54.845048 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.845056 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.845063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.845125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.870700 1136586 cri.go:89] found id: ""
	I1208 01:58:54.870725 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.870734 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.870741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.870799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.899897 1136586 cri.go:89] found id: ""
	I1208 01:58:54.899923 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.899934 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.899941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.900002 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.928551 1136586 cri.go:89] found id: ""
	I1208 01:58:54.928575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.928584 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.928593 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.928606 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.991743 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.991769 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:54.991782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:55.022605 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:55.022696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:55.052018 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:55.052044 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:55.112862 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:55.112979 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
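
The "Gathering logs for ..." steps run fixed commands in varying order: `journalctl -u kubelet -n 400`, `journalctl -u containerd -n 400`, a filtered `dmesg`, and the container-status fallback shown earlier. A sketch that replays the journal and dmesg commands verbatim; it assumes a systemd host, as the minikube node is:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s", s.name, err, out)
	}
}
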
	I1208 01:58:57.628955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.639865 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.639964 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.667931 1136586 cri.go:89] found id: ""
	I1208 01:58:57.667954 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.667962 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.667969 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.668039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.696303 1136586 cri.go:89] found id: ""
	I1208 01:58:57.696328 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.696337 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.696343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.696402 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.720015 1136586 cri.go:89] found id: ""
	I1208 01:58:57.720043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.720052 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.720059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.720120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.748838 1136586 cri.go:89] found id: ""
	I1208 01:58:57.748910 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.748934 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.748953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.749033 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.776554 1136586 cri.go:89] found id: ""
	I1208 01:58:57.776575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.776584 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.776591 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.776648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.800791 1136586 cri.go:89] found id: ""
	I1208 01:58:57.800815 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.800823 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.800830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.800904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.825904 1136586 cri.go:89] found id: ""
	I1208 01:58:57.825975 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.825998 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.826021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.826157 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.853294 1136586 cri.go:89] found id: ""
	I1208 01:58:57.853318 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.853327 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.853336 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.853348 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.868267 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.868292 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.934230 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.934259 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:57.934274 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:57.960735 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.960767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:57.989741 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.989770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.546140 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.557379 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.557497 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.583568 1136586 cri.go:89] found id: ""
	I1208 01:59:00.583595 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.583605 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.583611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.583695 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.615812 1136586 cri.go:89] found id: ""
	I1208 01:59:00.615838 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.615847 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.615856 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.615924 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.642865 1136586 cri.go:89] found id: ""
	I1208 01:59:00.642905 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.642914 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.642921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.642991 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.669343 1136586 cri.go:89] found id: ""
	I1208 01:59:00.669418 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.669434 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.669441 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.669501 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.695611 1136586 cri.go:89] found id: ""
	I1208 01:59:00.695688 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.695702 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.695709 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.695774 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.721947 1136586 cri.go:89] found id: ""
	I1208 01:59:00.721974 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.721983 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.721989 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.722059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.747456 1136586 cri.go:89] found id: ""
	I1208 01:59:00.747485 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.747493 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.747500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.747567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.774802 1136586 cri.go:89] found id: ""
	I1208 01:59:00.774868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.774884 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.774894 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.774906 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.832246 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.832282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.847202 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.847231 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.912820 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.912843 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:00.912856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:00.938649 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.938689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.468247 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.479180 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.479248 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.503843 1136586 cri.go:89] found id: ""
	I1208 01:59:03.503868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.503877 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.503884 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.503946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.533070 1136586 cri.go:89] found id: ""
	I1208 01:59:03.533092 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.533101 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.533107 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.533173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.560639 1136586 cri.go:89] found id: ""
	I1208 01:59:03.560662 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.560670 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.560677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.560738 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.589123 1136586 cri.go:89] found id: ""
	I1208 01:59:03.589150 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.589159 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.589165 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.589225 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.620870 1136586 cri.go:89] found id: ""
	I1208 01:59:03.620893 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.620902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.620908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.620966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.648582 1136586 cri.go:89] found id: ""
	I1208 01:59:03.648607 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.648616 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.648623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.648688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.676092 1136586 cri.go:89] found id: ""
	I1208 01:59:03.676117 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.676125 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.676131 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.676193 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.704985 1136586 cri.go:89] found id: ""
	I1208 01:59:03.705012 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.705021 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.705031 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.705048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.762437 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.762476 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.777354 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.777423 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.852604 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
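The failure mode is identical across retries: /var/lib/minikube/kubeconfig points kubectl at https://localhost:8443, and "connection refused" on [::1]:8443 means nothing is bound to that port, which matches crictl finding no kube-apiserver container. As a sketch, one way to confirm from inside the node (ss: -l listening sockets, -t TCP, -n numeric, -p owning process):

	sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'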
	I1208 01:59:03.852630 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:03.852644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:03.877929 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.877964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.407680 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.418391 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.418489 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.448290 1136586 cri.go:89] found id: ""
	I1208 01:59:06.448312 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.448321 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.448327 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.448386 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.473926 1136586 cri.go:89] found id: ""
	I1208 01:59:06.473958 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.473967 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.473974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.474037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.499614 1136586 cri.go:89] found id: ""
	I1208 01:59:06.499640 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.499649 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.499656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.499717 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.526871 1136586 cri.go:89] found id: ""
	I1208 01:59:06.526895 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.526904 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.526910 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.526970 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.551675 1136586 cri.go:89] found id: ""
	I1208 01:59:06.551706 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.551716 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.551722 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.551797 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.576680 1136586 cri.go:89] found id: ""
	I1208 01:59:06.576705 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.576714 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.576724 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.576784 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.613884 1136586 cri.go:89] found id: ""
	I1208 01:59:06.613921 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.613930 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.613939 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.614010 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.642583 1136586 cri.go:89] found id: ""
	I1208 01:59:06.642619 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.642629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.642638 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.642650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.709864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.701412   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.701971   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.703666   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.704330   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.706029   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:06.709936 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:06.709962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:06.739423 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.739463 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.767654 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.767684 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.826250 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.826285 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
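On the gather commands themselves: journalctl -u restricts output to a single systemd unit and -n to the last N entries, while dmesg -H formats kernel messages for humans, -P suppresses the pager that -H would otherwise start, -L=never disables color, and --level keeps only the listed severities. An equivalent interactive form, as a sketch:

	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -H -P -L=never --level warn,err,crit,alert,emerg | tail -n 400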
	I1208 01:59:09.342623 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.355321 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.355406 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.392040 1136586 cri.go:89] found id: ""
	I1208 01:59:09.392067 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.392080 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.392091 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.392161 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.420346 1136586 cri.go:89] found id: ""
	I1208 01:59:09.420372 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.420381 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.420387 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.420454 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.446119 1136586 cri.go:89] found id: ""
	I1208 01:59:09.446145 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.446154 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.446161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.446224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.470836 1136586 cri.go:89] found id: ""
	I1208 01:59:09.470859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.470867 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.470873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.470930 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.495896 1136586 cri.go:89] found id: ""
	I1208 01:59:09.495964 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.495988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.496000 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.496076 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.521109 1136586 cri.go:89] found id: ""
	I1208 01:59:09.521136 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.521145 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.521151 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.521211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.551629 1136586 cri.go:89] found id: ""
	I1208 01:59:09.551652 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.551668 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.551676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.551740 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.577446 1136586 cri.go:89] found id: ""
	I1208 01:59:09.577472 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.577481 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.577490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.577500 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.641466 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.641501 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.657574 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.657600 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.724794 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.716983   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.717413   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.718926   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.719242   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.720846   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:09.724818 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:09.724830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:09.749729 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.749761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.285155 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.296049 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.296118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.325857 1136586 cri.go:89] found id: ""
	I1208 01:59:12.325891 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.325900 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.325907 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.325992 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.363392 1136586 cri.go:89] found id: ""
	I1208 01:59:12.363419 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.363428 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.363434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.363499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.392776 1136586 cri.go:89] found id: ""
	I1208 01:59:12.392803 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.392812 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.392817 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.392884 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.418895 1136586 cri.go:89] found id: ""
	I1208 01:59:12.418919 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.418928 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.418935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.418994 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.444923 1136586 cri.go:89] found id: ""
	I1208 01:59:12.444947 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.444960 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.444966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.445087 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.471912 1136586 cri.go:89] found id: ""
	I1208 01:59:12.471982 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.472006 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.472019 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.472093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.496844 1136586 cri.go:89] found id: ""
	I1208 01:59:12.496877 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.496886 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.496892 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.496966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.523601 1136586 cri.go:89] found id: ""
	I1208 01:59:12.523626 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.523635 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.523645 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.523656 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.581608 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.581646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.598560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.598638 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.666409 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.657320   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.658356   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.659120   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660581   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660908   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:12.666430 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:12.666474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:12.692286 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.692321 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
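The container-status gather is deliberately runtime-agnostic: the backquoted which crictl || echo crictl expands to the crictl path when it is installed (otherwise to the bare name, which then fails), and the trailing || sudo docker ps -a falls back to Docker. A POSIX-flavored equivalent, as a sketch:

	sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a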
	I1208 01:59:15.220645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.234496 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.234563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.259957 1136586 cri.go:89] found id: ""
	I1208 01:59:15.259981 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.259991 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.259997 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.260059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.285880 1136586 cri.go:89] found id: ""
	I1208 01:59:15.285906 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.285915 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.285921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.285982 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.311506 1136586 cri.go:89] found id: ""
	I1208 01:59:15.311533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.311545 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.311552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.311615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.336490 1136586 cri.go:89] found id: ""
	I1208 01:59:15.336515 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.336524 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.336531 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.336590 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.365039 1136586 cri.go:89] found id: ""
	I1208 01:59:15.365064 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.365073 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.365079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.365143 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.399712 1136586 cri.go:89] found id: ""
	I1208 01:59:15.399740 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.399749 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.399756 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.399821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.427492 1136586 cri.go:89] found id: ""
	I1208 01:59:15.427517 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.427527 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.427533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.427599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.453022 1136586 cri.go:89] found id: ""
	I1208 01:59:15.453050 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.453059 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.453068 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.453081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.468204 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.468283 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.533761 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.525297   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.525841   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.527416   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.528754   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.529318   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:15.533785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:15.533801 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:15.558879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.558914 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.593769 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.593794 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.158848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.169444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.169517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.195546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.195572 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.195581 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.195587 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.195649 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.220906 1136586 cri.go:89] found id: ""
	I1208 01:59:18.220928 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.220942 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.220948 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.221008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.248546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.248574 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.248584 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.248590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.248652 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.273450 1136586 cri.go:89] found id: ""
	I1208 01:59:18.273477 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.273486 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.273492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.273558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.298830 1136586 cri.go:89] found id: ""
	I1208 01:59:18.298857 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.298867 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.298874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.298936 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.328161 1136586 cri.go:89] found id: ""
	I1208 01:59:18.328182 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.328191 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.328198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.328258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.369715 1136586 cri.go:89] found id: ""
	I1208 01:59:18.369747 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.369756 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.369763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.369822 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.400838 1136586 cri.go:89] found id: ""
	I1208 01:59:18.400865 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.400874 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.400883 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:18.400913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:18.429677 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.429711 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.462210 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.462239 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.517535 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.517571 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.533236 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.533267 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.604338 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.591883   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.593321   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.594794   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.596094   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.597033   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:21.106017 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.116977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.117060 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.145425 1136586 cri.go:89] found id: ""
	I1208 01:59:21.145503 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.145526 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.145544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.145633 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.169097 1136586 cri.go:89] found id: ""
	I1208 01:59:21.169125 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.169134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.169140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.169205 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.195045 1136586 cri.go:89] found id: ""
	I1208 01:59:21.195071 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.195081 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.195088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.195153 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.221094 1136586 cri.go:89] found id: ""
	I1208 01:59:21.221128 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.221137 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.221144 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.221213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.247434 1136586 cri.go:89] found id: ""
	I1208 01:59:21.247457 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.247466 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.247472 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.247531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.278610 1136586 cri.go:89] found id: ""
	I1208 01:59:21.278633 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.278642 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.278648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.278712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.304567 1136586 cri.go:89] found id: ""
	I1208 01:59:21.304638 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.304654 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.304662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.304731 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.331211 1136586 cri.go:89] found id: ""
	I1208 01:59:21.331281 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.331304 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.331324 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.331355 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.392474 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.392509 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.413166 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.413192 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.491167 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.478340   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.482949   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.483824   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.485685   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.486126   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:21.491190 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:21.491204 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:21.516454 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.516487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:24.050552 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:24.061833 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:24.061907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:24.089336 1136586 cri.go:89] found id: ""
	I1208 01:59:24.089363 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.089372 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:24.089380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:24.089442 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.115231 1136586 cri.go:89] found id: ""
	I1208 01:59:24.115256 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.115265 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.115272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.115347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.141479 1136586 cri.go:89] found id: ""
	I1208 01:59:24.141505 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.141515 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.141522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.141580 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.166759 1136586 cri.go:89] found id: ""
	I1208 01:59:24.166786 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.166795 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.166802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.166862 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.191431 1136586 cri.go:89] found id: ""
	I1208 01:59:24.191453 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.191462 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.191468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.191525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.216578 1136586 cri.go:89] found id: ""
	I1208 01:59:24.216618 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.216628 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.216635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.216708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.242316 1136586 cri.go:89] found id: ""
	I1208 01:59:24.242343 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.242352 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.242358 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.242420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.267328 1136586 cri.go:89] found id: ""
	I1208 01:59:24.267355 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.267365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.267375 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.267386 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.322866 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.322901 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.337393 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.337420 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.422627 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.422649 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:24.422662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:24.447517 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.447551 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.974915 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.985831 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.985904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:27.015934 1136586 cri.go:89] found id: ""
	I1208 01:59:27.015960 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.015970 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:27.015977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:27.016043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:27.042350 1136586 cri.go:89] found id: ""
	I1208 01:59:27.042376 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.042386 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:27.042400 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:27.042482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:27.068981 1136586 cri.go:89] found id: ""
	I1208 01:59:27.069007 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.069015 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:27.069021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:27.069086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.097058 1136586 cri.go:89] found id: ""
	I1208 01:59:27.097086 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.097095 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.097105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.097168 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.127221 1136586 cri.go:89] found id: ""
	I1208 01:59:27.127245 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.127253 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.127260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.127318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.152834 1136586 cri.go:89] found id: ""
	I1208 01:59:27.152859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.152869 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.152875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.152942 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.185563 1136586 cri.go:89] found id: ""
	I1208 01:59:27.185591 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.185600 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.185606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.185667 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.213022 1136586 cri.go:89] found id: ""
	I1208 01:59:27.213099 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.213125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.213147 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.213183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.272193 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.272229 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.289811 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.289892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.364663 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:27.364695 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:27.364720 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:27.392211 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.392286 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:29.931677 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.942629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.942709 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.971856 1136586 cri.go:89] found id: ""
	I1208 01:59:29.971882 1136586 logs.go:282] 0 containers: []
	W1208 01:59:29.971891 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.971898 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.971958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:30.000222 1136586 cri.go:89] found id: ""
	I1208 01:59:30.000248 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.000258 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:30.000265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:30.000330 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:30.039259 1136586 cri.go:89] found id: ""
	I1208 01:59:30.039285 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.039295 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:30.039301 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:30.039370 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:30.096203 1136586 cri.go:89] found id: ""
	I1208 01:59:30.096247 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.096258 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:30.096265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:30.096348 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.125007 1136586 cri.go:89] found id: ""
	I1208 01:59:30.125034 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.125044 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.125051 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.125138 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.155888 1136586 cri.go:89] found id: ""
	I1208 01:59:30.155914 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.155924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.155931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.155996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.183068 1136586 cri.go:89] found id: ""
	I1208 01:59:30.183104 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.183114 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.183121 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.183186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.211552 1136586 cri.go:89] found id: ""
	I1208 01:59:30.211577 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.211585 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.211601 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:30.211613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:30.238738 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.238789 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.272245 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.272275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.331871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.331909 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:30.349711 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.349742 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.428964 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:32.929192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.940100 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.940183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.963581 1136586 cri.go:89] found id: ""
	I1208 01:59:32.963602 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.963611 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.963617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.963678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.992028 1136586 cri.go:89] found id: ""
	I1208 01:59:32.992054 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.992063 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.992069 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.992130 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:33.023809 1136586 cri.go:89] found id: ""
	I1208 01:59:33.023836 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.023846 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:33.023852 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:33.023919 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:33.048510 1136586 cri.go:89] found id: ""
	I1208 01:59:33.048533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.048541 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:33.048548 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:33.048608 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:33.075068 1136586 cri.go:89] found id: ""
	I1208 01:59:33.075096 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.075106 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:33.075113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:33.075173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.099238 1136586 cri.go:89] found id: ""
	I1208 01:59:33.099264 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.099273 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.099280 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.099345 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.123805 1136586 cri.go:89] found id: ""
	I1208 01:59:33.123831 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.123840 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.123846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.123905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.152142 1136586 cri.go:89] found id: ""
	I1208 01:59:33.152166 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.152175 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.152184 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.152195 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:33.210457 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.210492 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.225387 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.225415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.288797 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.288820 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:33.288834 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:33.314642 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.314675 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:35.847043 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.865523 1136586 out.go:203] 
	W1208 01:59:35.868530 1136586 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 01:59:35.868757 1136586 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 01:59:35.868776 1136586 out.go:285] * Related issues:
	* Related issues:
	W1208 01:59:35.868792 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1208 01:59:35.868833 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1208 01:59:35.873508 1136586 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
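Note: the stderr above is minikube's readiness loop repeating every few seconds until the 6m0s StartHostTimeout expires: a pgrep for a kube-apiserver process, one crictl query per control-plane component, then a log-gathering pass. Every query returns an empty ID list, so no control-plane container was ever created; this points at a kubelet/bootstrap failure rather than a crashed apiserver. A minimal sketch for re-running the same probes by hand against this profile (the pgrep/crictl/journalctl commands are copied from the log; driving them through minikube ssh is an assumption):

	# re-run minikube's own probes inside the node container
	minikube ssh -p newest-cni-457779 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	minikube ssh -p newest-cni-457779 -- sudo crictl ps -a --quiet --name=kube-apiserver
	# the kubelet journal usually names the reason the static pods never started
	minikube ssh -p newest-cni-457779 -- 'sudo journalctl -u kubelet -n 400'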
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
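The proxy snapshot matters because a populated HTTP_PROXY/HTTPS_PROXY/NO_PROXY on the host can produce exactly this kind of "connection refused" symptom against a forwarded apiserver port; all three are empty here, which rules that out. A quick hedged check on any host (plain shell, not part of the harness):

	# verify no proxy variables are set in the current environment
	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy vars set"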
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-457779
helpers_test.go:243: (dbg) docker inspect newest-cni-457779:

-- stdout --
	[
	    {
	        "Id": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	        "Created": "2025-12-08T01:43:39.768991386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:53:27.037311302Z",
	            "FinishedAt": "2025-12-08T01:53:25.665351923Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hostname",
	        "HostsPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hosts",
	        "LogPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515-json.log",
	        "Name": "/newest-cni-457779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-457779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-457779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	                "LowerDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-457779",
	                "Source": "/var/lib/docker/volumes/newest-cni-457779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-457779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-457779",
	                "name.minikube.sigs.k8s.io": "newest-cni-457779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1a947731c9f343bfc621f32c5e5e6b87b4d6596e40159c82f35b05d4b004c86",
	            "SandboxKey": "/var/run/docker/netns/a1a947731c9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-457779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d0:aa:7b:8e:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e759035a3431798f7b6fae1fcd872afa7240c356fb1da4c53589714768a6edc3",
	                    "EndpointID": "88ca36c415275c64fba1e1779bb8c75173dfd0b7a6e82aa393b48ff675c0db50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-457779",
	                        "638bfd2d42fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
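The inspect output shows the node container itself is healthy: State.Status is "running", memory is capped at 3221225472 bytes (the --memory=3072 flag), and ports 22/2376/5000/8443/32443 are published on 127.0.0.1. The failure therefore sits inside the guest, not at the Docker layer. A hedged sketch using docker inspect's Go-template -f flag to pull just those fields instead of the full JSON (container name taken from this report):

	# container state and start time only
	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' newest-cni-457779
	# host port bound to the forwarded apiserver port 8443
	docker inspect -f '{{index .NetworkSettings.Ports "8443/tcp"}}' newest-cni-457779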
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (341.308116ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
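--format={{.Host}} renders a single field of the status struct through a Go template, which is why only "Running" appears; exit status 2 with the host running is consistent with the K8S_APISERVER_MISSING exit above (host up, apiserver down). For comparison, the unformatted call prints the full component table (same binary and profile as this report; a sketch, not harness output):

	out/minikube-linux-arm64 status -p newest-cni-457779                        # full host/kubelet/apiserver/kubeconfig table
	out/minikube-linux-arm64 status -p newest-cni-457779 --format='{{.Host}}'   # single field, as the harness does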
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25: (1.625654291s)
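The harness trims the capture to the last 25 entries per section (logs -n 25). When triaging locally it can help to keep everything; the --file flag below is an assumption about the minikube CLI, not something exercised in this report:

	# dump the complete log set to a file for offline triage (--file assumed)
	out/minikube-linux-arm64 -p newest-cni-457779 logs --file=/tmp/newest-cni-457779-logs.txt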
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-719683 image list --format=json                                                                                                                                                                                                                │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ pause   │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ unpause │ -p embed-certs-719683 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:51 UTC │                     │
	│ stop    │ -p newest-cni-457779 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-457779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:53:26.756000 1136586 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:53:26.756538 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756548 1136586 out.go:374] Setting ErrFile to fd 2...
	I1208 01:53:26.756553 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756842 1136586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:53:26.757268 1136586 out.go:368] Setting JSON to false
	I1208 01:53:26.758219 1136586 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23760,"bootTime":1765135047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:53:26.758285 1136586 start.go:143] virtualization:  
	I1208 01:53:26.761027 1136586 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:53:26.763300 1136586 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:53:26.763385 1136586 notify.go:221] Checking for updates...
	I1208 01:53:26.769236 1136586 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:53:26.772301 1136586 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:26.775351 1136586 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:53:26.778370 1136586 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:53:26.781331 1136586 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:53:26.784939 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:26.785587 1136586 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:53:26.821497 1136586 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:53:26.821612 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.884858 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.874574541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.884969 1136586 docker.go:319] overlay module found
	I1208 01:53:26.888166 1136586 out.go:179] * Using the docker driver based on existing profile
	I1208 01:53:26.891132 1136586 start.go:309] selected driver: docker
	I1208 01:53:26.891162 1136586 start.go:927] validating driver "docker" against &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.891271 1136586 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:53:26.892009 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.946578 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.937487208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.946934 1136586 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:53:26.946970 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:26.947032 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:26.947088 1136586 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.951997 1136586 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:53:26.954840 1136586 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:53:26.957745 1136586 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:53:26.960653 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:26.960709 1136586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:53:26.960722 1136586 cache.go:65] Caching tarball of preloaded images
	I1208 01:53:26.960734 1136586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:53:26.960819 1136586 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:53:26.960831 1136586 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:53:26.961033 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:26.980599 1136586 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:53:26.980630 1136586 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:53:26.980646 1136586 cache.go:243] Successfully downloaded all kic artifacts
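
The daemon check above can be reproduced by hand: docker exits non-zero from image inspect when a reference is absent from the local daemon, which mirrors the found-in-daemon/skipping-pull decision logged here. A minimal hand-run sketch using this run's image reference:

    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164'
    # Exit status 0 means the image is already present, so the pull can be skipped.
    if docker image inspect "$IMG" >/dev/null 2>&1; then
      echo "found in local docker daemon, skipping pull"
    else
      docker pull "$IMG"
    fi
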
	I1208 01:53:26.980676 1136586 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:53:26.980741 1136586 start.go:364] duration metric: took 41.994µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:53:26.980766 1136586 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:53:26.980775 1136586 fix.go:54] fixHost starting: 
	I1208 01:53:26.981064 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:26.998167 1136586 fix.go:112] recreateIfNeeded on newest-cni-457779: state=Stopped err=<nil>
	W1208 01:53:26.998205 1136586 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:53:25.593347 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:27.593483 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:30.093460 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:27.003360 1136586 out.go:252] * Restarting existing docker container for "newest-cni-457779" ...
	I1208 01:53:27.003497 1136586 cli_runner.go:164] Run: docker start newest-cni-457779
	I1208 01:53:27.261076 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:27.282732 1136586 kic.go:430] container "newest-cni-457779" state is running.
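
The restart path is plain docker CLI plus Go templates; a hedged hand-run equivalent of the check-then-start sequence above (container name and port template copied from the log):

    NAME=newest-cni-457779
    state=$(docker container inspect "$NAME" --format '{{.State.Status}}')
    [ "$state" = "running" ] || docker start "$NAME"
    # The node's SSH endpoint is the published host port for 22/tcp (33873 in this run):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$NAME"
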
	I1208 01:53:27.283122 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:27.311045 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:27.311287 1136586 machine.go:94] provisionDockerMachine start ...
	I1208 01:53:27.311346 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:27.335078 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:27.335680 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:27.335692 1136586 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:53:27.336739 1136586 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:53:30.502303 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.502328 1136586 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:53:30.502403 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.520473 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.520821 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.520832 1136586 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:53:30.680340 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.680522 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.698887 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.699207 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.699230 1136586 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
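
The script above is idempotent: it rewrites an existing 127.0.1.1 entry in place and appends one only when none exists, following the Debian convention of mapping a host's own name to 127.0.1.1. A quick check of the expected end state inside the node (illustrative):

    hostname                       # expect: newest-cni-457779
    grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 newest-cni-457779
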
	I1208 01:53:30.850881 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:53:30.850907 1136586 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:53:30.850931 1136586 ubuntu.go:190] setting up certificates
	I1208 01:53:30.850939 1136586 provision.go:84] configureAuth start
	I1208 01:53:30.851000 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:30.868852 1136586 provision.go:143] copyHostCerts
	I1208 01:53:30.868925 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:53:30.868935 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:53:30.869018 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:53:30.869113 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:53:30.869119 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:53:30.869143 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:53:30.869192 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:53:30.869197 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:53:30.869218 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:53:30.869262 1136586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
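
minikube generates this server certificate in Go, but the SAN set logged above can be reproduced with openssl when debugging trust issues. A sketch, assuming ca.pem/ca-key.pem and server-key.pem sit in the working directory (an openssl stand-in, not minikube's implementation):

    openssl req -new -key server-key.pem -subj "/O=jenkins.newest-cni-457779" -out server.csr
    # Sign with the same SAN list the log reports (bash required for the <(...) substitution):
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-457779')
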
	I1208 01:53:31.146721 1136586 provision.go:177] copyRemoteCerts
	I1208 01:53:31.146819 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:53:31.146887 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.165202 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.270344 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:53:31.288520 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:53:31.307009 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:53:31.325139 1136586 provision.go:87] duration metric: took 474.176778ms to configureAuth
	I1208 01:53:31.325166 1136586 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:53:31.325413 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:31.325428 1136586 machine.go:97] duration metric: took 4.014132188s to provisionDockerMachine
	I1208 01:53:31.325438 1136586 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:53:31.325453 1136586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:53:31.325527 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:53:31.325572 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.342958 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.450484 1136586 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:53:31.453930 1136586 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:53:31.453961 1136586 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:53:31.453978 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:53:31.454035 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:53:31.454126 1136586 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:53:31.454236 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:53:31.461814 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:31.480492 1136586 start.go:296] duration metric: took 155.029827ms for postStartSetup
	I1208 01:53:31.480576 1136586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:53:31.480620 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.498567 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.608416 1136586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:53:31.613302 1136586 fix.go:56] duration metric: took 4.632518901s for fixHost
	I1208 01:53:31.613327 1136586 start.go:83] releasing machines lock for "newest-cni-457779", held for 4.632572375s
	I1208 01:53:31.613414 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:31.630699 1136586 ssh_runner.go:195] Run: cat /version.json
	I1208 01:53:31.630750 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.630785 1136586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:53:31.630847 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.650759 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.653824 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.754273 1136586 ssh_runner.go:195] Run: systemctl --version
	I1208 01:53:31.849639 1136586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:53:31.855754 1136586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:53:31.855850 1136586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:53:31.866557 1136586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:53:31.866588 1136586 start.go:496] detecting cgroup driver to use...
	I1208 01:53:31.866621 1136586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:53:31.866707 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:53:31.887994 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:53:31.906727 1136586 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:53:31.906830 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:53:31.922954 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:53:31.936664 1136586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:53:32.054316 1136586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:53:32.173483 1136586 docker.go:234] disabling docker service ...
	I1208 01:53:32.173578 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:53:32.189444 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:53:32.206742 1136586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:53:32.325262 1136586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:53:32.443602 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
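
Only one CRI consumer should own the node, so the sequence above stops, disables, and masks both cri-docker and docker before containerd is reconfigured. A condensed hand-run equivalent (masking keeps the units from being pulled back in as dependencies):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"
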
	I1208 01:53:32.456770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:53:32.473213 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:53:32.483724 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:53:32.493138 1136586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:53:32.493251 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:53:32.502652 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.512217 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:53:32.521333 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.530989 1136586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:53:32.539889 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:53:32.549127 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:53:32.558425 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:53:32.567684 1136586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:53:32.575542 1136586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:53:32.583139 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:32.723777 1136586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 01:53:32.846014 1136586 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:53:32.846088 1136586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:53:32.849865 1136586 start.go:564] Will wait 60s for crictl version
	I1208 01:53:32.849924 1136586 ssh_runner.go:195] Run: which crictl
	I1208 01:53:32.853562 1136586 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:53:32.880330 1136586 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
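
With /etc/crictl.yaml pointing at the containerd socket, the runtime can be probed over CRI directly; the grep is a hedged way to confirm that the SystemdCgroup=false edit survived the restart, assuming crictl info dumps the effective containerd config as it does on recent versions:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    sudo crictl info | grep SystemdCgroup   # expect false for the cgroupfs driver
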
	I1208 01:53:32.880452 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.901579 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.928462 1136586 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:53:32.931363 1136586 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:53:32.945897 1136586 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:53:32.950021 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
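
Both hosts-file updates in this start path (host.minikube.internal here, control-plane.minikube.internal below) use the same pattern: filter out any stale line for the name, append a fresh mapping, and copy the temp file back in one step. A generic sketch of that pattern; pin_host is a hypothetical helper name, not minikube code:

    pin_host() {  # usage: pin_host <ip> <name>
      tab=$(printf '\t')
      { grep -v "${tab}$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    pin_host 192.168.76.1 host.minikube.internal
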
	I1208 01:53:32.963090 1136586 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1208 01:53:32.593363 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:33.093099 1128548 node_ready.go:38] duration metric: took 6m0.00024354s for node "no-preload-536520" to be "Ready" ...
	I1208 01:53:33.096356 1128548 out.go:203] 
	W1208 01:53:33.099424 1128548 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:53:33.099449 1128548 out.go:285] * 
	W1208 01:53:33.101601 1128548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:53:33.103637 1128548 out.go:203] 
	I1208 01:53:32.966006 1136586 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:53:32.966181 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:32.966277 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.001671 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.001709 1136586 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:53:33.001783 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.037763 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.037789 1136586 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:53:33.037796 1136586 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:53:33.037895 1136586 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
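
The unit above is installed as /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in (see the scp lines below), so after the daemon-reload the effective command line can be confirmed with systemctl; illustrative checks:

    systemctl cat kubelet                           # unit file plus drop-ins
    systemctl show kubelet -p ExecStart --no-pager  # the merged ExecStart
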
	I1208 01:53:33.037971 1136586 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:53:33.063762 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:33.063790 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:33.063814 1136586 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:53:33.063838 1136586 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:53:33.063976 1136586 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:53:33.064046 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:53:33.072124 1136586 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:53:33.072199 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:53:33.079978 1136586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:53:33.094440 1136586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:53:33.114285 1136586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
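
The rendered config lands at /var/tmp/minikube/kubeadm.yaml.new. When a start like this one fails, the file can be sanity-checked with kubeadm itself; a hedged example, assuming kubeadm sits next to kubectl in the binaries directory and is new enough (>= 1.26) to have the validate subcommand:

    KD=/var/lib/minikube/binaries/v1.35.0-beta.0
    sudo "$KD/kubeadm" config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo "$KD/kubeadm" config images list --config /var/tmp/minikube/kubeadm.yaml.new
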
	I1208 01:53:33.148370 1136586 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:53:33.154333 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:33.175383 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:33.368419 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:33.425889 1136586 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:53:33.425915 1136586 certs.go:195] generating shared ca certs ...
	I1208 01:53:33.425933 1136586 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:33.426101 1136586 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:53:33.426153 1136586 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:53:33.426161 1136586 certs.go:257] generating profile certs ...
	I1208 01:53:33.426267 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:53:33.426332 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:53:33.426377 1136586 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:53:33.426524 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:53:33.426568 1136586 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:53:33.426582 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:53:33.426612 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:53:33.426642 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:53:33.426669 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:53:33.426734 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:33.427335 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:53:33.467362 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:53:33.494653 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:53:33.520274 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:53:33.539143 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:53:33.558359 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:53:33.583585 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:53:33.606437 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:53:33.629051 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:53:33.649569 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:53:33.670329 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:53:33.709388 1136586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:53:33.723127 1136586 ssh_runner.go:195] Run: openssl version
	I1208 01:53:33.729848 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.737400 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:53:33.744968 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749630 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749695 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.792574 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:53:33.800140 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.812741 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:53:33.821534 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825755 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825831 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.873472 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:53:33.882187 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.890767 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:53:33.901446 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907874 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907943 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.952061 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
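
Each openssl x509 -hash run above computes the subject hash that OpenSSL's trust-directory lookup expects, and the test -L lines then verify the <hash>.0 symlink. A compact sketch of the mapping (b5213941 is the hash observed for minikubeCA in this run):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"
    test -L "/etc/ssl/certs/${h}.0" && echo "trust link ${h}.0 in place"
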
	I1208 01:53:33.960568 1136586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:53:33.965214 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:53:34.008563 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:53:34.055484 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:53:34.112335 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:53:34.165388 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:53:34.216189 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
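
Each -checkend 86400 probe asks whether a certificate expires within the next 24 hours: openssl exits 0 when the cert stays valid past the window and non-zero otherwise, which is presumably why the restart path checks every control-plane cert here before reusing it. Illustrative use:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "valid for at least 24h"
    else
      echo "expires within 24h; regenerate"
    fi
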
	I1208 01:53:34.263034 1136586 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:34.263135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:53:34.263235 1136586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:53:34.294120 1136586 cri.go:89] found id: ""
	I1208 01:53:34.294243 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:53:34.304846 1136586 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:53:34.304879 1136586 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:53:34.304960 1136586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:53:34.316473 1136586 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:53:34.317189 1136586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.317527 1136586 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-457779" cluster setting kubeconfig missing "newest-cni-457779" context setting]
	I1208 01:53:34.318043 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.319993 1136586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:53:34.332564 1136586 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:53:34.332599 1136586 kubeadm.go:602] duration metric: took 27.712722ms to restartPrimaryControlPlane
	I1208 01:53:34.332638 1136586 kubeadm.go:403] duration metric: took 69.60712ms to StartCluster
	I1208 01:53:34.332662 1136586 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.332751 1136586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.333761 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.334050 1136586 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:53:34.334509 1136586 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:53:34.334590 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:34.334604 1136586 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-457779"
	I1208 01:53:34.334619 1136586 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-457779"
	I1208 01:53:34.334646 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.334654 1136586 addons.go:70] Setting dashboard=true in profile "newest-cni-457779"
	I1208 01:53:34.334664 1136586 addons.go:239] Setting addon dashboard=true in "newest-cni-457779"
	W1208 01:53:34.334680 1136586 addons.go:248] addon dashboard should already be in state true
	I1208 01:53:34.334701 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.335128 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.335222 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.338384 1136586 out.go:179] * Verifying Kubernetes components...
	I1208 01:53:34.338808 1136586 addons.go:70] Setting default-storageclass=true in profile "newest-cni-457779"
	I1208 01:53:34.338830 1136586 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-457779"
	I1208 01:53:34.339192 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.342236 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:34.384696 1136586 addons.go:239] Setting addon default-storageclass=true in "newest-cni-457779"
	I1208 01:53:34.384738 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.385173 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.395531 1136586 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:53:34.398489 1136586 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:53:34.401766 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:53:34.401802 1136586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:53:34.401870 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.413624 1136586 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:53:34.416611 1136586 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.416635 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:53:34.416703 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.446412 1136586 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.446432 1136586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:53:34.446519 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.468661 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.486870 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.495400 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.648143 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:34.791310 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:53:34.791383 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:53:34.801259 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.809204 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.852787 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:53:34.852815 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:53:34.976510 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:53:34.976546 1136586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:53:35.059518 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:53:35.059546 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:53:35.081694 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:53:35.081725 1136586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:53:35.097221 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:53:35.097249 1136586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:53:35.113396 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:53:35.113423 1136586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:53:35.128309 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:53:35.128332 1136586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:53:35.144063 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.144088 1136586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
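[editor's note] The "scp memory --> <path>" lines mean the manifest bytes come from an embedded asset rather than a file on the runner's disk. One plausible way to reproduce that staging step outside minikube is to pipe the bytes over ssh into `sudo tee`; this is a sketch, not minikube's actual sshutil transfer, and the key path and port are the values shown in the log:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// stageManifest writes an in-memory manifest to a path on the node by piping
// it through ssh into `sudo tee` (an approximation of "scp memory --> path").
func stageManifest(port, keyPath, remotePath string, content []byte) error {
	cmd := exec.Command("ssh", "-i", keyPath, "-p", port, "docker@127.0.0.1",
		fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	cmd.Stdin = bytes.NewReader(content)
	return cmd.Run()
}

func main() {
	ns := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
	err := stageManifest("33873",
		"/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa",
		"/etc/kubernetes/addons/dashboard-ns.yaml", ns)
	if err != nil {
		fmt.Println("stage failed:", err)
	}
}
```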
	I1208 01:53:35.163973 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.343568 1136586 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:53:35.343639 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
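[editor's note] From here the log interleaves two loops: the addon applies below, and an apiserver wait that reruns `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms (see the repeated pgrep lines at :35.844, :36.344, :36.844, and so on). A hedged sketch of that poll; the flags and interval mirror the log, the timeout and structure are assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears.
// -x matches the whole command line, -n picks the newest match, -f matches
// against the full command line rather than just the process name.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // exit status 0: the process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```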
	W1208 01:53:35.343728 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343749 1136586 retry.go:31] will retry after 313.237886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343796 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343802 1136586 retry.go:31] will retry after 267.065812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343986 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343999 1136586 retry.go:31] will retry after 357.870271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.611924 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:35.657423 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:35.685479 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.685507 1136586 retry.go:31] will retry after 235.819569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.702853 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:35.745089 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.745200 1136586 retry.go:31] will retry after 496.615001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.783116 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.783150 1136586 retry.go:31] will retry after 415.603405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.844207 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:35.922577 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:35.992239 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.992284 1136586 retry.go:31] will retry after 419.233092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.199657 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.242360 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.275822 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.275881 1136586 retry.go:31] will retry after 506.304834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:36.313961 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.313996 1136586 retry.go:31] will retry after 341.203132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.344211 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:36.412076 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:36.475666 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.475724 1136586 retry.go:31] will retry after 757.567155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.656038 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.717469 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.717504 1136586 retry.go:31] will retry after 858.45693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.782939 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.844509 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:36.857199 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.857314 1136586 retry.go:31] will retry after 1.254351113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.233554 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:37.293681 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.293719 1136586 retry.go:31] will retry after 1.120312347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.343808 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:37.576883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:37.657137 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.657170 1136586 retry.go:31] will retry after 1.273828893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.844396 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.111904 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:38.175735 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.175771 1136586 retry.go:31] will retry after 1.371961744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.344170 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.414206 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:38.473557 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.473592 1136586 retry.go:31] will retry after 1.305474532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.843968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.931790 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:38.991073 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.991107 1136586 retry.go:31] will retry after 2.323329318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.344538 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:39.548354 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:39.614499 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.614532 1136586 retry.go:31] will retry after 2.345376349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.779883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:39.839516 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.839550 1136586 retry.go:31] will retry after 1.632764803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.843744 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.343857 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.844131 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.314885 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:41.344468 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:41.399054 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.399086 1136586 retry.go:31] will retry after 1.628703977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.473438 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:41.539567 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.539608 1136586 retry.go:31] will retry after 4.6526683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.844314 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.960631 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:42.037435 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:42.037475 1136586 retry.go:31] will retry after 2.24839836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:42.343723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:42.843913 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.028344 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:43.092228 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:43.092267 1136586 retry.go:31] will retry after 6.138872071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:43.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.843812 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:44.286696 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:44.343910 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:44.363154 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:44.363184 1136586 retry.go:31] will retry after 4.885412288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:44.843802 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.344023 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.844504 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.193193 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:46.256318 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:46.256352 1136586 retry.go:31] will retry after 6.576205276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:46.344576 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.844679 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.344358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.843925 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
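The `sudo pgrep -xnf kube-apiserver.*minikube.*` lines interleaved throughout are a roughly 500ms poll waiting for the apiserver process to appear on the node. A minimal sketch of that polling loop is below — run locally for brevity, whereas the log's ssh_runner executes it over SSH inside the node, and the exact loop structure is an assumption rather than minikube's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process matching the
// pattern exists; pgrep exits 0 on a match and non-zero otherwise.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest process only, -f match full command line.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(30 * time.Second))
}
```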
	I1208 01:53:49.231766 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:49.249321 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:49.295577 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.295606 1136586 retry.go:31] will retry after 5.897796539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:49.321879 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.321913 1136586 retry.go:31] will retry after 5.135606393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.343793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.843777 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.844708 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.344109 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.844601 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.344090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.833191 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:52.843854 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:52.942603 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:52.942641 1136586 retry.go:31] will retry after 10.350172314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:53.344347 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:53.843800 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.343948 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.457681 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:54.519827 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:54.519864 1136586 retry.go:31] will retry after 12.267694675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:54.844117 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.193625 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:55.256579 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:55.256612 1136586 retry.go:31] will retry after 11.163170119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:55.343847 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.843783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.343814 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.844654 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.344616 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.844487 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.343880 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.843787 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.343848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.843826 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.343799 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.844518 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.343861 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.844575 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.343756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.844391 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
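The half-second cadence of `sudo pgrep -xnf kube-apiserver.*minikube.*` above is a wait loop: minikube keeps polling for an apiserver process until one appears. A sketch of such a loop under the same assumption (pgrep exits 0 when at least one process matches); the deadline value is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep, as the log does every ~500ms,
// until a kube-apiserver process matching the minikube pattern exists
// or the deadline passes. The 8-minute deadline is illustrative.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(8 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
```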
	I1208 01:54:03.293666 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:54:03.344443 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:03.397612 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:03.397650 1136586 retry.go:31] will retry after 19.276295687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:03.844417 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.343968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.344710 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.843828 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.420172 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:06.484485 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.484519 1136586 retry.go:31] will retry after 9.376809348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.788188 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:54:06.843694 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:06.852042 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.852079 1136586 retry.go:31] will retry after 14.243902866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:07.344022 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:07.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.344592 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.844723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.344453 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.843950 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.344400 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.844496 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.343717 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.844737 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.344750 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.843793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.343904 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.343908 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.844260 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.344591 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.843791 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.862033 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:15.923558 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:15.923598 1136586 retry.go:31] will retry after 11.623443237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:16.344246 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:16.844386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.344635 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.344732 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.843932 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.344121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.844530 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.344183 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.844204 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.097241 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:21.169765 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.169803 1136586 retry.go:31] will retry after 14.268049825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.343856 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.844672 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.344587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.674615 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:22.733064 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.733093 1136586 retry.go:31] will retry after 25.324201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.844513 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.344392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.844423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.343928 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.844484 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.344404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.844721 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.344197 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.844678 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.343798 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.547765 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:27.612562 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.612601 1136586 retry.go:31] will retry after 28.822296594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.344385 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.344796 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.344407 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.844544 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.343765 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.844221 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.343845 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.844333 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.344526 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.844321 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:34.344033 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:34.344149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:34.370172 1136586 cri.go:89] found id: ""
	I1208 01:54:34.370196 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.370205 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:34.370211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:34.370269 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:34.395619 1136586 cri.go:89] found id: ""
	I1208 01:54:34.395642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.395650 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:34.395656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:34.395720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:34.422963 1136586 cri.go:89] found id: ""
	I1208 01:54:34.422993 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.423003 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:34.423009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:34.423074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:34.451846 1136586 cri.go:89] found id: ""
	I1208 01:54:34.451871 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.451879 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:34.451886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:34.451951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:34.480597 1136586 cri.go:89] found id: ""
	I1208 01:54:34.480622 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.480631 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:34.480638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:34.480728 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:34.505381 1136586 cri.go:89] found id: ""
	I1208 01:54:34.505412 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.505421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:34.505427 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:34.505486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:34.531276 1136586 cri.go:89] found id: ""
	I1208 01:54:34.531304 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.531313 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:34.531320 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:34.531384 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:34.556518 1136586 cri.go:89] found id: ""
	I1208 01:54:34.556542 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.556550 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
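Each cri.go:54 / `crictl ps` pair above queries containerd for one control-plane component, and every query returns an empty ID list (`found id: ""`), meaning no kube-apiserver, etcd, or other control-plane container was ever created. A sketch of that enumeration step, using only the crictl flags visible in the log; the component list is copied from it.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs wraps the crictl query from the log: -a includes
// exited containers, --quiet prints IDs only, --name filters by name.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list the log walks through.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```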
	I1208 01:54:34.556566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:34.556578 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:34.613370 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:34.613408 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:34.628308 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:34.628338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:34.694181 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:34.694202 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:34.694216 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:34.720374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:34.720425 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
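With no containers to inspect, the log-gathering step falls back to host-level sources: journalctl for kubelet and containerd, filtered dmesg, and a raw `crictl ps -a` with a docker fallback. A sketch of that step using the exact shell commands from the log; only the error handling around them is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the same host-level diagnostics the log shows when
// the control plane is down. Each command runs through bash so the
// pipes and fallbacks survive intact.
func gatherLogs() map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	out := make(map[string]string)
	for name, cmd := range cmds {
		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			out[name] = fmt.Sprintf("failed: %v\n%s", err, b)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logText := range gatherLogs() {
		fmt.Printf("==> %s <==\n%s\n", name, logText)
	}
}
```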
	I1208 01:54:35.438126 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:35.498508 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:35.498543 1136586 retry.go:31] will retry after 43.888808015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:37.252653 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:37.264309 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:37.264385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:37.296827 1136586 cri.go:89] found id: ""
	I1208 01:54:37.296856 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.296865 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:37.296872 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:37.296938 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:37.322795 1136586 cri.go:89] found id: ""
	I1208 01:54:37.322818 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.322826 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:37.322832 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:37.322890 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:37.347015 1136586 cri.go:89] found id: ""
	I1208 01:54:37.347039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.347048 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:37.347054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:37.347112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:37.376654 1136586 cri.go:89] found id: ""
	I1208 01:54:37.376685 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.376694 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:37.376702 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:37.376768 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:37.402392 1136586 cri.go:89] found id: ""
	I1208 01:54:37.402419 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.402428 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:37.402434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:37.402531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:37.427265 1136586 cri.go:89] found id: ""
	I1208 01:54:37.427292 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.427302 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:37.427308 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:37.427375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:37.452009 1136586 cri.go:89] found id: ""
	I1208 01:54:37.452036 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.452046 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:37.452052 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:37.452113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:37.478250 1136586 cri.go:89] found id: ""
	I1208 01:54:37.478274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.478282 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:37.478292 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:37.478303 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:37.492990 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:37.493059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:37.560010 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
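Every `describe nodes` attempt fails client-side with "connection refused" on localhost:8443, so kubectl never reaches the apiserver at all. A caller can separate "server down" from "command wrong" by probing the apiserver directly; the sketch below assumes the standard /readyz endpoint and skips TLS verification because the probe only cares about reachability (depending on RBAC the endpoint may answer 401/403 rather than 200, but any HTTP answer at all means the server is up).

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiServerReachable probes https://localhost:8443/readyz, the standard
// kube-apiserver readiness endpoint. TLS verification is skipped since
// this probe checks reachability, not identity; a 401/403 still proves
// the server is listening.
func apiServerReachable() bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/readyz")
	if err != nil {
		return false // e.g. "connection refused", as in the log
	}
	defer resp.Body.Close()
	return true
}

func main() {
	if !apiServerReachable() {
		fmt.Println("kube-apiserver unreachable; kubectl calls would fail as logged")
		return
	}
	fmt.Println("kube-apiserver reachable")
}
```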
	I1208 01:54:37.560033 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:37.560046 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:37.586791 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:37.586827 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:37.617527 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:37.617603 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.174865 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:40.187458 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:40.187538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:40.216164 1136586 cri.go:89] found id: ""
	I1208 01:54:40.216195 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.216204 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:40.216211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:40.216280 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:40.243524 1136586 cri.go:89] found id: ""
	I1208 01:54:40.243552 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.243561 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:40.243567 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:40.243632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:40.273554 1136586 cri.go:89] found id: ""
	I1208 01:54:40.273582 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.273592 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:40.273598 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:40.273660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:40.301228 1136586 cri.go:89] found id: ""
	I1208 01:54:40.301249 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.301257 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:40.301263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:40.301321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:40.330159 1136586 cri.go:89] found id: ""
	I1208 01:54:40.330179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.330187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:40.330193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:40.330252 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:40.355514 1136586 cri.go:89] found id: ""
	I1208 01:54:40.355583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.355604 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:40.355611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:40.355685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:40.381442 1136586 cri.go:89] found id: ""
	I1208 01:54:40.381468 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.381477 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:40.381483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:40.381539 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:40.406014 1136586 cri.go:89] found id: ""
	I1208 01:54:40.406039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.406048 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:40.406057 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:40.406069 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:40.465966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:40.465986 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:40.466000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:40.490766 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:40.490799 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:40.518111 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:40.518140 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.573667 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:40.573702 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
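	The cycle above is the one this section repeats for the rest of the window: minikube probes for a kube-apiserver process, queries the CRI for each expected control-plane container, and only after every query returns empty does it fall back to dumping kubelet, containerd, dmesg, and container-status logs. A minimal shell sketch of that probe, assuming it is run inside the node (e.g. via minikube ssh); every command is taken from the log lines above:

	# Look for a live apiserver process first, then for any trace of the
	# expected control-plane containers; empty crictl output corresponds to
	# the 'No container was found matching ...' warnings in this log.
	if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	              kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "no container found matching \"$name\""
	  done
	fi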
	I1208 01:54:43.088883 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:43.112185 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:43.112253 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:43.175929 1136586 cri.go:89] found id: ""
	I1208 01:54:43.175952 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.175960 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:43.175966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:43.176037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:43.208920 1136586 cri.go:89] found id: ""
	I1208 01:54:43.208946 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.208955 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:43.208961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:43.209024 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:43.235210 1136586 cri.go:89] found id: ""
	I1208 01:54:43.235235 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.235245 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:43.235252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:43.235319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:43.263618 1136586 cri.go:89] found id: ""
	I1208 01:54:43.263642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.263658 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:43.263666 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:43.263727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:43.290748 1136586 cri.go:89] found id: ""
	I1208 01:54:43.290783 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.290792 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:43.290798 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:43.290857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:43.314874 1136586 cri.go:89] found id: ""
	I1208 01:54:43.314898 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.314906 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:43.314913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:43.314975 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:43.339655 1136586 cri.go:89] found id: ""
	I1208 01:54:43.339680 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.339707 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:43.339713 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:43.339777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:43.364203 1136586 cri.go:89] found id: ""
	I1208 01:54:43.364230 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.364240 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:43.364250 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:43.364261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:43.390041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:43.390079 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:43.420626 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:43.420661 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:43.475834 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:43.475876 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.491658 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:43.491696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:43.559609 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:46.059911 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:46.070737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:46.070825 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:46.110556 1136586 cri.go:89] found id: ""
	I1208 01:54:46.110583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.110593 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:46.110600 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:46.110665 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:46.186917 1136586 cri.go:89] found id: ""
	I1208 01:54:46.186942 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.186951 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:46.186957 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:46.187021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:46.212604 1136586 cri.go:89] found id: ""
	I1208 01:54:46.212631 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.212639 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:46.212646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:46.212724 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:46.239989 1136586 cri.go:89] found id: ""
	I1208 01:54:46.240043 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.240054 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:46.240060 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:46.240217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:46.266799 1136586 cri.go:89] found id: ""
	I1208 01:54:46.266829 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.266839 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:46.266845 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:46.266918 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:46.294724 1136586 cri.go:89] found id: ""
	I1208 01:54:46.294753 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.294762 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:46.294769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:46.294829 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:46.320725 1136586 cri.go:89] found id: ""
	I1208 01:54:46.320754 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.320764 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:46.320771 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:46.320854 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:46.350768 1136586 cri.go:89] found id: ""
	I1208 01:54:46.350792 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.350801 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:46.350810 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:46.350822 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:46.416454 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:46.407778    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.408509    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410162    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410818    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.412543    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:46.416490 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:46.416510 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:46.442082 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:46.442115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:46.474546 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:46.474573 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:46.532104 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:46.532141 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:48.057590 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:48.120301 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:48.120337 1136586 retry.go:31] will retry after 17.544839516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
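	The apply fails because kubectl's client-side validation needs the apiserver's /openapi/v2 endpoint, which nothing is serving, so minikube queues a retry (here with a ~17.5 s backoff). A hedged sketch of an equivalent retry loop; the paths and kubeconfig are the ones from the log, but the delay schedule is illustrative, not minikube's actual backoff:

	# Keep re-applying the addon manifest until the apiserver answers.
	for delay in 5 10 20 40; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storageclass.yaml && break
	  echo "apply failed; retrying in ${delay}s" >&2
	  sleep "$delay"
	done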
	I1208 01:54:49.047527 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:49.058154 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.058224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.087906 1136586 cri.go:89] found id: ""
	I1208 01:54:49.087974 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.087999 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.088010 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.088086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.147486 1136586 cri.go:89] found id: ""
	I1208 01:54:49.147562 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.147585 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.147603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.147699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.190637 1136586 cri.go:89] found id: ""
	I1208 01:54:49.190712 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.190735 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.190755 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.190842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.222497 1136586 cri.go:89] found id: ""
	I1208 01:54:49.222525 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.222534 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.222549 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.222624 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.247026 1136586 cri.go:89] found id: ""
	I1208 01:54:49.247052 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.247061 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.247067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.247125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:49.275349 1136586 cri.go:89] found id: ""
	I1208 01:54:49.275378 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.275387 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:49.275394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:49.275499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:49.300792 1136586 cri.go:89] found id: ""
	I1208 01:54:49.300820 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.300829 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:49.300835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:49.300892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:49.325853 1136586 cri.go:89] found id: ""
	I1208 01:54:49.325882 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.325890 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:49.325900 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:49.325912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:49.384418 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:49.384468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:49.399275 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:49.399307 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:49.466718 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:49.458157    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.458602    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460310    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460773    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.462192    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:49.466785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:49.466814 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:49.491769 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:49.491803 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:52.023420 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:52.034753 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:52.034828 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:52.064923 1136586 cri.go:89] found id: ""
	I1208 01:54:52.064945 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.064953 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:52.064960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:52.065022 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:52.104945 1136586 cri.go:89] found id: ""
	I1208 01:54:52.104968 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.104977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:52.104983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:52.105043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:52.171374 1136586 cri.go:89] found id: ""
	I1208 01:54:52.171395 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.171404 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:52.171410 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:52.171468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:52.201431 1136586 cri.go:89] found id: ""
	I1208 01:54:52.201476 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.201485 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:52.201492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:52.201563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:52.226892 1136586 cri.go:89] found id: ""
	I1208 01:54:52.226920 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.226929 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:52.226935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:52.227001 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:52.252811 1136586 cri.go:89] found id: ""
	I1208 01:54:52.252891 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.252914 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:52.252935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:52.253034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:52.282156 1136586 cri.go:89] found id: ""
	I1208 01:54:52.282179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.282188 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:52.282195 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:52.282259 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:52.308580 1136586 cri.go:89] found id: ""
	I1208 01:54:52.308607 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.308618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:52.308628 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:52.308639 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:52.364992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:52.365028 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:52.379850 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:52.379877 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:52.445238 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:52.436912    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.437761    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439367    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439683    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.441222    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:52.445260 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:52.445273 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:52.471470 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:52.471505 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:55.003548 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:55.026046 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:55.026131 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:55.053887 1136586 cri.go:89] found id: ""
	I1208 01:54:55.053964 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.053989 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:55.054009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:55.054101 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:55.088698 1136586 cri.go:89] found id: ""
	I1208 01:54:55.088724 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.088733 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:55.088760 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:55.088849 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:55.170740 1136586 cri.go:89] found id: ""
	I1208 01:54:55.170776 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.170785 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:55.170791 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:55.170899 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:55.197620 1136586 cri.go:89] found id: ""
	I1208 01:54:55.197656 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.197666 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:55.197690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:55.197776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:55.223553 1136586 cri.go:89] found id: ""
	I1208 01:54:55.223580 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.223589 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:55.223595 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:55.223680 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:55.248608 1136586 cri.go:89] found id: ""
	I1208 01:54:55.248677 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.248692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:55.248699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:55.248765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:55.274165 1136586 cri.go:89] found id: ""
	I1208 01:54:55.274232 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.274254 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:55.274272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:55.274361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:55.300558 1136586 cri.go:89] found id: ""
	I1208 01:54:55.300590 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.300600 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:55.300611 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:55.300622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:55.360386 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:55.360422 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:55.375869 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:55.375899 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:55.447970 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:55.439084    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.439796    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.441452    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.442051    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.443786    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:55.447993 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:55.448005 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:55.473774 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:55.473808 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:56.435194 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:56.498425 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:54:56.498545 1136586 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	]
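	Every failure in this stretch reduces to the same symptom: nothing is listening on localhost:8443, so the describe-nodes dumps and both addon applies all die with connection refused. A short triage sketch from inside the node, assuming the standard kube-apiserver health endpoint (/livez) and a local probe that skips certificate verification:

	# Confirm the apiserver port is dead, then ask why the kubelet never
	# started the control-plane containers.
	curl -sk https://localhost:8443/livez || echo "apiserver not reachable"
	sudo crictl ps -a                             # control-plane containers missing, per the log
	sudo journalctl -u kubelet -n 100 --no-pager  # kubelet errors usually name the cause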
	I1208 01:54:58.006121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:58.018380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:58.018521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:58.045144 1136586 cri.go:89] found id: ""
	I1208 01:54:58.045180 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.045189 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:58.045211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:58.045296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:58.071125 1136586 cri.go:89] found id: ""
	I1208 01:54:58.071151 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.071160 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:58.071167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:58.071226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:58.121465 1136586 cri.go:89] found id: ""
	I1208 01:54:58.121492 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.121511 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:58.121519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:58.121589 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:58.182249 1136586 cri.go:89] found id: ""
	I1208 01:54:58.182274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.182282 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:58.182288 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:58.182350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:58.211355 1136586 cri.go:89] found id: ""
	I1208 01:54:58.211380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.211389 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:58.211395 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:58.211458 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:58.239234 1136586 cri.go:89] found id: ""
	I1208 01:54:58.239262 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.239271 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:58.239278 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:58.239338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:58.268137 1136586 cri.go:89] found id: ""
	I1208 01:54:58.268212 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.268227 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:58.268235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:58.268311 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:58.298356 1136586 cri.go:89] found id: ""
	I1208 01:54:58.298380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.298389 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:58.298399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:58.298483 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:58.356947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:58.356983 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:58.371448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:58.371475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:58.435566 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:58.427538    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.428174    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.429739    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.430336    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.431872    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:58.435589 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:58.435602 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:58.460122 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:58.460156 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:00.988330 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:00.999374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:00.999446 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:01.036571 1136586 cri.go:89] found id: ""
	I1208 01:55:01.036650 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.036687 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:01.036714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:01.036792 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:01.062231 1136586 cri.go:89] found id: ""
	I1208 01:55:01.062257 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.062267 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:01.062274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:01.062333 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:01.087570 1136586 cri.go:89] found id: ""
	I1208 01:55:01.087592 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.087601 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:01.087608 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:01.087668 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:01.137796 1136586 cri.go:89] found id: ""
	I1208 01:55:01.137822 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.137831 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:01.137838 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:01.137905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:01.193217 1136586 cri.go:89] found id: ""
	I1208 01:55:01.193240 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.193249 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:01.193256 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:01.193322 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:01.225114 1136586 cri.go:89] found id: ""
	I1208 01:55:01.225191 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.225217 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:01.225236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:01.225335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:01.253406 1136586 cri.go:89] found id: ""
	I1208 01:55:01.253485 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.253510 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:01.253529 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:01.253641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:01.279950 1136586 cri.go:89] found id: ""
	I1208 01:55:01.280032 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.280058 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:01.280077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:01.280102 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:01.314699 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:01.314731 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:01.371902 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:01.371941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:01.387482 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:01.387511 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:01.454737 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:01.454761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:01.454775 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
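
The block above is one full iteration of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks the CRI (via crictl) for each expected control-plane container by name, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, and containerd logs. A minimal sketch of the per-component poll, assuming the same crictl flags the log shows (the helper name listByName is hypothetical; the real code lives in minikube's cri package):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listByName mirrors the logged command:
    //   sudo crictl ps -a --quiet --name=<component>
    // and returns the matching container IDs (possibly none).
    func listByName(name string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, c := range components {
            if ids := listByName(c); len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }
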
	I1208 01:55:03.982003 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:03.993616 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:03.993689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:04.022115 1136586 cri.go:89] found id: ""
	I1208 01:55:04.022143 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.022152 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:04.022162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:04.022228 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:04.052694 1136586 cri.go:89] found id: ""
	I1208 01:55:04.052720 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.052730 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:04.052737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:04.052799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:04.077702 1136586 cri.go:89] found id: ""
	I1208 01:55:04.077728 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.077737 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:04.077750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:04.077812 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:04.141633 1136586 cri.go:89] found id: ""
	I1208 01:55:04.141668 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.141677 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:04.141683 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:04.141753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:04.188894 1136586 cri.go:89] found id: ""
	I1208 01:55:04.188962 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.188976 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:04.188983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:04.189051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:04.218926 1136586 cri.go:89] found id: ""
	I1208 01:55:04.218951 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.218960 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:04.218966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:04.219028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:04.244759 1136586 cri.go:89] found id: ""
	I1208 01:55:04.244786 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.244795 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:04.244802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:04.244885 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:04.270311 1136586 cri.go:89] found id: ""
	I1208 01:55:04.270337 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.270346 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:04.270377 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:04.270396 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:04.298563 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:04.298594 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:04.357076 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:04.357110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:04.372213 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:04.372255 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:04.437142 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:04.437163 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:04.437176 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
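
Every "describe nodes" attempt fails the same way: kubectl's discovery cache (the memcache.go frames) retries the group-list fetch against https://localhost:8443/api, producing the five identical E-lines per attempt, and each dial is refused because nothing is listening on 8443. A sketch of the probe kubectl is effectively making (endpoint taken from the log; TLS verification is skipped here only because no server is up to verify):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // apiReachable issues the same request kubectl's discovery client logs above.
    func apiReachable() error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8443/api?timeout=32s")
        if err != nil {
            return err // e.g. dial tcp [::1]:8443: connect: connection refused
        }
        defer resp.Body.Close()
        return nil
    }

    func main() {
        if err := apiReachable(); err != nil {
            fmt.Println("apiserver not reachable:", err)
        }
    }
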
	I1208 01:55:05.665650 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:55:05.727737 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:05.727864 1136586 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
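
The storageclass addon fails before any object is created: kubectl's client-side validation first downloads the server's OpenAPI schema, and that download hits the same dead endpoint. Passing --validate=false would only move the failure to the apply call itself, since the API server is still required to accept the object. The "will retry" wording reflects a retrying apply; a hedged sketch of that pattern, assuming a simple linear backoff (minikube's actual backoff policy may differ):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs the logged kubectl apply until it succeeds or the
    // attempt budget is spent. sudo accepts the leading VAR=value form, which
    // is exactly how the log invokes kubectl.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 1; i <= attempts; i++ {
            cmd := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
                "apply", "--force", "-f", manifest)
            if err = cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(time.Duration(i) * time.Second) // linear backoff (assumption)
        }
        return fmt.Errorf("apply %s: %w", manifest, err)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
            fmt.Println("giving up:", err)
        }
    }
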
	I1208 01:55:06.963817 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:06.974536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:06.974639 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:06.999437 1136586 cri.go:89] found id: ""
	I1208 01:55:06.999466 1136586 logs.go:282] 0 containers: []
	W1208 01:55:06.999475 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:06.999481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:06.999540 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:07.029225 1136586 cri.go:89] found id: ""
	I1208 01:55:07.029253 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.029262 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:07.029274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:07.029343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:07.058657 1136586 cri.go:89] found id: ""
	I1208 01:55:07.058683 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.058692 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:07.058698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:07.058757 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:07.090130 1136586 cri.go:89] found id: ""
	I1208 01:55:07.090158 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.090168 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:07.090175 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:07.090236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:07.139122 1136586 cri.go:89] found id: ""
	I1208 01:55:07.139177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.139187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:07.139194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:07.139261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:07.172306 1136586 cri.go:89] found id: ""
	I1208 01:55:07.172328 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.172336 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:07.172343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:07.172400 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:07.204660 1136586 cri.go:89] found id: ""
	I1208 01:55:07.204689 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.204698 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:07.204705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:07.204764 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:07.230319 1136586 cri.go:89] found id: ""
	I1208 01:55:07.230349 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.230358 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:07.230368 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:07.230380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:07.285979 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:07.286015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:07.301365 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:07.301391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:07.369069 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:07.369140 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:07.369161 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:07.394018 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:07.394051 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
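
Note that the container-status gather is deliberately runtime-agnostic: the bash one-liner prefers crictl (resolving it with `which crictl || echo crictl`) and falls back to `docker ps -a` when crictl is missing or errors. The same fallback expressed in Go, as a sketch (the helper name containerStatus is made up here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the bash one-liner in the log.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }
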
	I1208 01:55:09.924985 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:09.935805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:09.935908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:09.962622 1136586 cri.go:89] found id: ""
	I1208 01:55:09.962647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.962656 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:09.962662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:09.962729 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:09.988243 1136586 cri.go:89] found id: ""
	I1208 01:55:09.988266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.988275 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:09.988283 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:09.988347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:10.019449 1136586 cri.go:89] found id: ""
	I1208 01:55:10.019482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.019492 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:10.019499 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:10.019570 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:10.048613 1136586 cri.go:89] found id: ""
	I1208 01:55:10.048637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.048646 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:10.048652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:10.048726 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:10.080915 1136586 cri.go:89] found id: ""
	I1208 01:55:10.080940 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.080949 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:10.080956 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:10.081021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:10.144352 1136586 cri.go:89] found id: ""
	I1208 01:55:10.144375 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.144384 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:10.144396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:10.144479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:10.182563 1136586 cri.go:89] found id: ""
	I1208 01:55:10.182586 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.182595 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:10.182601 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:10.182662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:10.213649 1136586 cri.go:89] found id: ""
	I1208 01:55:10.213682 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.213694 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:10.213706 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:10.213724 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:10.242084 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:10.242114 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:10.298146 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:10.298181 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:10.313543 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:10.313574 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:10.380205 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:10.380228 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:10.380248 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:12.905658 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:12.916576 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:12.916648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:12.944122 1136586 cri.go:89] found id: ""
	I1208 01:55:12.944146 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.944155 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:12.944161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:12.944222 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:12.969438 1136586 cri.go:89] found id: ""
	I1208 01:55:12.969464 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.969473 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:12.969481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:12.969542 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:12.997359 1136586 cri.go:89] found id: ""
	I1208 01:55:12.997388 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.997397 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:12.997403 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:12.997470 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:13.025718 1136586 cri.go:89] found id: ""
	I1208 01:55:13.025746 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.025756 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:13.025763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:13.025823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:13.056865 1136586 cri.go:89] found id: ""
	I1208 01:55:13.056892 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.056902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:13.056908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:13.056969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:13.082432 1136586 cri.go:89] found id: ""
	I1208 01:55:13.082528 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.082546 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:13.082554 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:13.082626 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:13.125069 1136586 cri.go:89] found id: ""
	I1208 01:55:13.125144 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.125168 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:13.125187 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:13.125272 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:13.178384 1136586 cri.go:89] found id: ""
	I1208 01:55:13.178482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.178507 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:13.178529 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:13.178567 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:13.239609 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:13.239644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:13.256212 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:13.256240 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:13.323842 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:13.323920 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:13.323949 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:13.348533 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:13.348570 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:15.879223 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:15.890243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:15.890364 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:15.914857 1136586 cri.go:89] found id: ""
	I1208 01:55:15.914886 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.914894 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:15.914901 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:15.914960 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:15.939097 1136586 cri.go:89] found id: ""
	I1208 01:55:15.939123 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.939134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:15.939140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:15.939201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:15.964064 1136586 cri.go:89] found id: ""
	I1208 01:55:15.964088 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.964097 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:15.964103 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:15.964167 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:15.989749 1136586 cri.go:89] found id: ""
	I1208 01:55:15.989789 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.989798 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:15.989805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:15.989864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:16.017523 1136586 cri.go:89] found id: ""
	I1208 01:55:16.017558 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.017567 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:16.017573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:16.017638 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:16.043968 1136586 cri.go:89] found id: ""
	I1208 01:55:16.043996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.044005 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:16.044012 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:16.044077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:16.068942 1136586 cri.go:89] found id: ""
	I1208 01:55:16.069012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.069038 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:16.069057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:16.069149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:16.110088 1136586 cri.go:89] found id: ""
	I1208 01:55:16.110117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.110127 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:16.110136 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:16.110147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:16.194161 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:16.194206 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:16.209083 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:16.209108 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:16.278327 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:16.278346 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:16.278361 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:16.304026 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:16.304059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:18.833542 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:18.844944 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:18.845029 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:18.871187 1136586 cri.go:89] found id: ""
	I1208 01:55:18.871210 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.871220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:18.871226 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:18.871287 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:18.899377 1136586 cri.go:89] found id: ""
	I1208 01:55:18.899399 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.899407 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:18.899413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:18.899473 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:18.924554 1136586 cri.go:89] found id: ""
	I1208 01:55:18.924578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.924587 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:18.924593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:18.924653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:18.949910 1136586 cri.go:89] found id: ""
	I1208 01:55:18.949932 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.949941 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:18.949947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:18.950008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:18.974978 1136586 cri.go:89] found id: ""
	I1208 01:55:18.975001 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.975009 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:18.975015 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:18.975074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:19.005380 1136586 cri.go:89] found id: ""
	I1208 01:55:19.005411 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.005421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:19.005429 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:19.005503 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:19.032668 1136586 cri.go:89] found id: ""
	I1208 01:55:19.032750 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.032765 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:19.032780 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:19.032843 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:19.059531 1136586 cri.go:89] found id: ""
	I1208 01:55:19.059562 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.059572 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:19.059602 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:19.059619 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:19.121579 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:19.121613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:19.138076 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:19.138103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:19.222963 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:19.222987 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:19.223000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:19.253325 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:19.253368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:19.388285 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:55:19.459805 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:19.459968 1136586 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:55:19.463177 1136586 out.go:179] * Enabled addons: 
	I1208 01:55:19.465938 1136586 addons.go:530] duration metric: took 1m45.131432136s for enable addons: enabled=[]
	I1208 01:55:21.781716 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:21.792431 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:21.792512 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:21.819119 1136586 cri.go:89] found id: ""
	I1208 01:55:21.819147 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.819157 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:21.819164 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:21.819230 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:21.848715 1136586 cri.go:89] found id: ""
	I1208 01:55:21.848751 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.848760 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:21.848767 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:21.848826 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:21.873926 1136586 cri.go:89] found id: ""
	I1208 01:55:21.873952 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.873961 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:21.873968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:21.874028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:21.900968 1136586 cri.go:89] found id: ""
	I1208 01:55:21.900995 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.901005 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:21.901011 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:21.901071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:21.929497 1136586 cri.go:89] found id: ""
	I1208 01:55:21.929524 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.929533 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:21.929540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:21.929600 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:21.954914 1136586 cri.go:89] found id: ""
	I1208 01:55:21.954936 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.954951 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:21.954959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:21.955020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:21.985551 1136586 cri.go:89] found id: ""
	I1208 01:55:21.985578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.985586 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:21.985593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:21.985656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:22.016148 1136586 cri.go:89] found id: ""
	I1208 01:55:22.016222 1136586 logs.go:282] 0 containers: []
	W1208 01:55:22.016244 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:22.016266 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:22.016305 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:22.049513 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:22.049585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:22.109605 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:22.109713 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:22.126061 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:22.126134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:22.225148 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:22.225170 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:22.225183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:24.750628 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:24.761806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:24.761883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:24.787831 1136586 cri.go:89] found id: ""
	I1208 01:55:24.787855 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.787864 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:24.787871 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:24.787931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:24.816489 1136586 cri.go:89] found id: ""
	I1208 01:55:24.816516 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.816526 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:24.816533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:24.816631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:24.843224 1136586 cri.go:89] found id: ""
	I1208 01:55:24.843247 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.843256 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:24.843262 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:24.843324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:24.869163 1136586 cri.go:89] found id: ""
	I1208 01:55:24.869186 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.869195 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:24.869202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:24.869261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:24.896657 1136586 cri.go:89] found id: ""
	I1208 01:55:24.896685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.896695 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:24.896701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:24.896763 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:24.924888 1136586 cri.go:89] found id: ""
	I1208 01:55:24.924918 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.924927 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:24.924934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:24.924999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:24.951093 1136586 cri.go:89] found id: ""
	I1208 01:55:24.951117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.951126 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:24.951133 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:24.951196 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:24.980609 1136586 cri.go:89] found id: ""
	I1208 01:55:24.980633 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.980642 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:24.980651 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:24.980662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:25.036369 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:25.036404 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:25.057565 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:25.057647 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:25.200105 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:25.189333    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.190129    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192138    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192915    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.194912    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:25.200136 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:25.200151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:25.227358 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:25.227398 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:27.756955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:27.767899 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:27.767972 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:27.795426 1136586 cri.go:89] found id: ""
	I1208 01:55:27.795451 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.795460 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:27.795466 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:27.795529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:27.821100 1136586 cri.go:89] found id: ""
	I1208 01:55:27.821127 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.821137 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:27.821143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:27.821213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:27.851486 1136586 cri.go:89] found id: ""
	I1208 01:55:27.851509 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.851518 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:27.851524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:27.851583 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:27.881644 1136586 cri.go:89] found id: ""
	I1208 01:55:27.881665 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.881673 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:27.881681 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:27.881739 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:27.911149 1136586 cri.go:89] found id: ""
	I1208 01:55:27.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.911185 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:27.911191 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:27.911296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:27.935972 1136586 cri.go:89] found id: ""
	I1208 01:55:27.936042 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.936069 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:27.936084 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:27.936158 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:27.961735 1136586 cri.go:89] found id: ""
	I1208 01:55:27.961762 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.961772 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:27.961778 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:27.961845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:27.987428 1136586 cri.go:89] found id: ""
	I1208 01:55:27.987452 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.987461 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:27.987471 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:27.987482 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:28.018603 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:28.018646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:28.051322 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:28.051395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:28.116115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:28.116154 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:28.140270 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:28.140297 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:28.224200 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:28.213883    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.214376    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218218    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218825    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.220332    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:30.725898 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:30.736353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:30.736438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:30.764621 1136586 cri.go:89] found id: ""
	I1208 01:55:30.764647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.764667 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:30.764691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:30.764772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:30.790477 1136586 cri.go:89] found id: ""
	I1208 01:55:30.790502 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.790510 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:30.790516 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:30.790577 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:30.816436 1136586 cri.go:89] found id: ""
	I1208 01:55:30.816522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.816539 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:30.816547 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:30.816625 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:30.845918 1136586 cri.go:89] found id: ""
	I1208 01:55:30.845944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.845953 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:30.845960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:30.846020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:30.870263 1136586 cri.go:89] found id: ""
	I1208 01:55:30.870307 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.870317 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:30.870323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:30.870388 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:30.896013 1136586 cri.go:89] found id: ""
	I1208 01:55:30.896041 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.896049 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:30.896057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:30.896174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:30.921585 1136586 cri.go:89] found id: ""
	I1208 01:55:30.921612 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.921621 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:30.921628 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:30.921689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:30.951330 1136586 cri.go:89] found id: ""
	I1208 01:55:30.951355 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.951365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:30.951374 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:30.951391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:30.977110 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:30.977151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:31.009469 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:31.009525 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:31.071586 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:31.071635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:31.087881 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:31.087927 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:31.188603 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:31.173005    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175001    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175960    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.177836    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.178524    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:33.688896 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:33.699658 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:33.699730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:33.723918 1136586 cri.go:89] found id: ""
	I1208 01:55:33.723944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.723952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:33.723959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:33.724017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:33.748249 1136586 cri.go:89] found id: ""
	I1208 01:55:33.748272 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.748281 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:33.748287 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:33.748361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:33.774082 1136586 cri.go:89] found id: ""
	I1208 01:55:33.774165 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.774188 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:33.774208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:33.774300 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:33.804783 1136586 cri.go:89] found id: ""
	I1208 01:55:33.804808 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.804817 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:33.804824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:33.804883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:33.830537 1136586 cri.go:89] found id: ""
	I1208 01:55:33.830568 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.830578 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:33.830584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:33.830645 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:33.855676 1136586 cri.go:89] found id: ""
	I1208 01:55:33.855702 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.855711 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:33.855719 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:33.855788 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:33.881829 1136586 cri.go:89] found id: ""
	I1208 01:55:33.881907 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.881943 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:33.881968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:33.882061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:33.911849 1136586 cri.go:89] found id: ""
	I1208 01:55:33.911872 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.911880 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:33.911925 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:33.911937 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:33.939161 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:33.939188 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:33.997922 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:33.997962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:34.019097 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:34.019129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:34.086047 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:34.076333    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.077036    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.078821    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.079347    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.081184    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:34.086070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:34.086081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:36.616392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:36.627074 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:36.627155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:36.655354 1136586 cri.go:89] found id: ""
	I1208 01:55:36.655378 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.655545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:36.655552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:36.655616 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:36.684592 1136586 cri.go:89] found id: ""
	I1208 01:55:36.684615 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.684623 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:36.684629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:36.684693 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:36.715198 1136586 cri.go:89] found id: ""
	I1208 01:55:36.715224 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.715233 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:36.715240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:36.715304 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:36.744302 1136586 cri.go:89] found id: ""
	I1208 01:55:36.744327 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.744337 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:36.744343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:36.744405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:36.769612 1136586 cri.go:89] found id: ""
	I1208 01:55:36.769637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.769646 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:36.769652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:36.769712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:36.796116 1136586 cri.go:89] found id: ""
	I1208 01:55:36.796138 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.796147 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:36.796153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:36.796212 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:36.824398 1136586 cri.go:89] found id: ""
	I1208 01:55:36.824424 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.824433 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:36.824439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:36.824543 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:36.849915 1136586 cri.go:89] found id: ""
	I1208 01:55:36.849942 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.849951 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:36.849960 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:36.849972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:36.904949 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:36.904986 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:36.919890 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:36.919919 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:36.983074 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:36.974477    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.975033    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.976856    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.977264    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.978951    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:36.983095 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:36.983111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:37.008505 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:37.008605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.548042 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:39.558613 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:39.558684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:39.582845 1136586 cri.go:89] found id: ""
	I1208 01:55:39.582870 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.582878 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:39.582885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:39.582946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:39.607991 1136586 cri.go:89] found id: ""
	I1208 01:55:39.608016 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.608025 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:39.608032 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:39.608094 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:39.633661 1136586 cri.go:89] found id: ""
	I1208 01:55:39.633685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.633694 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:39.633701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:39.633765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:39.658962 1136586 cri.go:89] found id: ""
	I1208 01:55:39.658989 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.658998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:39.659005 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:39.659064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:39.684407 1136586 cri.go:89] found id: ""
	I1208 01:55:39.684490 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.684514 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:39.684534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:39.684622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:39.715084 1136586 cri.go:89] found id: ""
	I1208 01:55:39.715109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.715118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:39.715125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:39.715191 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:39.740328 1136586 cri.go:89] found id: ""
	I1208 01:55:39.740352 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.740361 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:39.740368 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:39.740457 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:39.771393 1136586 cri.go:89] found id: ""
	I1208 01:55:39.771420 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.771429 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:39.771438 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:39.771450 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:39.797255 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:39.797291 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.826926 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:39.826954 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:39.882889 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:39.882925 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:39.898019 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:39.898048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:39.963174 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:39.954059    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.954638    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.956325    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.957071    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.958660    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:42.463393 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:42.473927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:42.474000 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:42.499722 1136586 cri.go:89] found id: ""
	I1208 01:55:42.499747 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.499757 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:42.499764 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:42.499842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:42.525555 1136586 cri.go:89] found id: ""
	I1208 01:55:42.525637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.525664 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:42.525671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:42.525745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:42.551105 1136586 cri.go:89] found id: ""
	I1208 01:55:42.551135 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.551144 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:42.551156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:42.551217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:42.576427 1136586 cri.go:89] found id: ""
	I1208 01:55:42.576500 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.576515 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:42.576522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:42.576587 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:42.606069 1136586 cri.go:89] found id: ""
	I1208 01:55:42.606102 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.606111 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:42.606118 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:42.606190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:42.631166 1136586 cri.go:89] found id: ""
	I1208 01:55:42.631193 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.631202 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:42.631208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:42.631267 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:42.655160 1136586 cri.go:89] found id: ""
	I1208 01:55:42.655238 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.655255 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:42.655266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:42.655329 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:42.680010 1136586 cri.go:89] found id: ""
	I1208 01:55:42.680085 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.680100 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
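Each retry cycle scans the CRI for every expected component; found id: "" with 0 containers means the container was never created at all, not merely unhealthy. The scan is equivalent to this loop (a sketch to run inside the node, assuming crictl is pointed at the containerd socket):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      printf '%-24s %s\n' "$c" "${ids:-<none>}"
    done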
	I1208 01:55:42.680111 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:42.680124 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:42.695151 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:42.695175 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:42.763022 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
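The failing probe is minikube's bundled kubectl run against the in-node kubeconfig; it can be reproduced by hand inside the node (paths copied from the log; the versioned binary directory varies per cluster):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig describe nodes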
	I1208 01:55:42.763046 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:42.763059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:42.788301 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:42.788337 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:42.823956 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:42.823981 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
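The gather step pulls the same four sources on every pass. To collect them manually inside the node (commands as in the log, with --no-pager added for interactive use; the last line falls back to docker when crictl is absent):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a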
	I1208 01:55:45.380090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:45.395413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:45.395485 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:45.439897 1136586 cri.go:89] found id: ""
	I1208 01:55:45.439925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.439935 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:45.439942 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:45.440007 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:45.465988 1136586 cri.go:89] found id: ""
	I1208 01:55:45.466012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.466020 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:45.466027 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:45.466099 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:45.491807 1136586 cri.go:89] found id: ""
	I1208 01:55:45.491834 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.491843 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:45.491850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:45.491913 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:45.516818 1136586 cri.go:89] found id: ""
	I1208 01:55:45.516843 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.516854 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:45.516861 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:45.516921 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:45.542497 1136586 cri.go:89] found id: ""
	I1208 01:55:45.542522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.542531 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:45.542538 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:45.542609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:45.568083 1136586 cri.go:89] found id: ""
	I1208 01:55:45.568109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.568118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:45.568125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:45.568183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:45.593517 1136586 cri.go:89] found id: ""
	I1208 01:55:45.593544 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.593554 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:45.593561 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:45.593674 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:45.618329 1136586 cri.go:89] found id: ""
	I1208 01:55:45.618356 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.618366 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:45.618375 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:45.618387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:45.682426 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:45.674188    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.674739    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676256    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676719    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.678224    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:45.682475 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:45.682489 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:45.708017 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:45.708054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:45.737945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:45.737975 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.793795 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:45.793830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
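Every retry cycle opens with a process-level probe before touching the CRI. Standalone form (flags as in the log: -x requires the pattern to match the whole command line, -n keeps only the newest match, -f matches against the full command line):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo 'apiserver process found' || echo 'apiserver process absent'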
	I1208 01:55:48.309212 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:48.320148 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:48.320220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:48.367705 1136586 cri.go:89] found id: ""
	I1208 01:55:48.367730 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.367739 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:48.367745 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:48.367804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:48.421729 1136586 cri.go:89] found id: ""
	I1208 01:55:48.421754 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.421763 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:48.421769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:48.421827 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:48.447771 1136586 cri.go:89] found id: ""
	I1208 01:55:48.447795 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.447804 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:48.447810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:48.447869 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:48.473161 1136586 cri.go:89] found id: ""
	I1208 01:55:48.473187 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.473196 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:48.473203 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:48.473265 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:48.498698 1136586 cri.go:89] found id: ""
	I1208 01:55:48.498723 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.498732 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:48.498738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:48.498798 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:48.527882 1136586 cri.go:89] found id: ""
	I1208 01:55:48.527908 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.527918 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:48.527925 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:48.528028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:48.554285 1136586 cri.go:89] found id: ""
	I1208 01:55:48.554311 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.554319 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:48.554326 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:48.554385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:48.580502 1136586 cri.go:89] found id: ""
	I1208 01:55:48.580529 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.580538 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:48.580548 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:48.580580 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:48.610294 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:48.610319 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:48.665141 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:48.665179 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.682234 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:48.682262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:48.759351 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:48.759375 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:48.759387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:51.285923 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:51.298330 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:51.298405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:51.324185 1136586 cri.go:89] found id: ""
	I1208 01:55:51.324212 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.324220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:51.324227 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:51.324289 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:51.373377 1136586 cri.go:89] found id: ""
	I1208 01:55:51.373405 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.373414 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:51.373421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:51.373482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:51.433499 1136586 cri.go:89] found id: ""
	I1208 01:55:51.433522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.433531 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:51.433537 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:51.433595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:51.458517 1136586 cri.go:89] found id: ""
	I1208 01:55:51.458543 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.458552 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:51.458558 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:51.458622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:51.488348 1136586 cri.go:89] found id: ""
	I1208 01:55:51.488373 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.488382 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:51.488389 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:51.488471 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:51.513083 1136586 cri.go:89] found id: ""
	I1208 01:55:51.513109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.513119 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:51.513126 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:51.513190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:51.537741 1136586 cri.go:89] found id: ""
	I1208 01:55:51.537785 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.537804 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:51.537811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:51.537886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:51.563745 1136586 cri.go:89] found id: ""
	I1208 01:55:51.563769 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.563777 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:51.563786 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:51.563797 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:51.594103 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:51.594137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:51.650065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:51.650099 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:51.665199 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:51.665275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:51.732191 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:51.724269    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.724970    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726434    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726795    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.728304    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:51.732221 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:51.732235 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.259222 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:54.271505 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:54.271585 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:54.300828 1136586 cri.go:89] found id: ""
	I1208 01:55:54.300860 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.300869 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:54.300875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:54.300944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:54.326203 1136586 cri.go:89] found id: ""
	I1208 01:55:54.326235 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.326245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:54.326251 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:54.326319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:54.392508 1136586 cri.go:89] found id: ""
	I1208 01:55:54.392537 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.392557 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:54.392564 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:54.392631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:54.443370 1136586 cri.go:89] found id: ""
	I1208 01:55:54.443403 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.443413 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:54.443419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:54.443479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:54.471931 1136586 cri.go:89] found id: ""
	I1208 01:55:54.471996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.472011 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:54.472018 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:54.472080 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:54.497863 1136586 cri.go:89] found id: ""
	I1208 01:55:54.497888 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.497897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:54.497905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:54.497966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:54.522372 1136586 cri.go:89] found id: ""
	I1208 01:55:54.522398 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.522408 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:54.522415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:54.522500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:54.549239 1136586 cri.go:89] found id: ""
	I1208 01:55:54.549266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.549275 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:54.549284 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:54.549316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:54.612864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:54.604382    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.605110    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.606733    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.607295    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.608865    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:54.612887 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:54.612900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.639721 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:54.639758 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:54.671819 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:54.671845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:54.734691 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:54.734736 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.251176 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:57.261934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:57.262008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:57.287436 1136586 cri.go:89] found id: ""
	I1208 01:55:57.287460 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.287469 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:57.287476 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:57.287538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:57.313930 1136586 cri.go:89] found id: ""
	I1208 01:55:57.313953 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.313962 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:57.313968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:57.314028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:57.340222 1136586 cri.go:89] found id: ""
	I1208 01:55:57.340245 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.340254 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:57.340260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:57.340321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:57.380005 1136586 cri.go:89] found id: ""
	I1208 01:55:57.380028 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.380037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:57.380044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:57.380111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:57.421841 1136586 cri.go:89] found id: ""
	I1208 01:55:57.421863 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.421871 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:57.421877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:57.421935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:57.456549 1136586 cri.go:89] found id: ""
	I1208 01:55:57.456579 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.456588 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:57.456594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:57.456656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:57.480374 1136586 cri.go:89] found id: ""
	I1208 01:55:57.480472 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.480487 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:57.480494 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:57.480567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:57.504897 1136586 cri.go:89] found id: ""
	I1208 01:55:57.504925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.504935 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:57.504944 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:57.504955 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:57.530334 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:57.530377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:57.561764 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:57.561791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:57.620753 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:57.620788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.636064 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:57.636155 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:57.701326 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:57.693243    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.694039    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695592    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695921    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.697403    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
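The timestamps show a steady three-second cadence (01:55:42, :45, :48, :51, :54, :57) with no state change between cycles. A bounded equivalent of the same wait (a hypothetical sketch, not minikube's code):

    # Poll up to 60s for an apiserver process, then give up.
    for i in $(seq 1 20); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done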
	I1208 01:56:00.203093 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:00.255847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:00.255935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:00.303978 1136586 cri.go:89] found id: ""
	I1208 01:56:00.304070 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.304095 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:00.304117 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:00.304214 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:00.413194 1136586 cri.go:89] found id: ""
	I1208 01:56:00.413283 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.413307 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:00.413328 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:00.413451 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:00.536345 1136586 cri.go:89] found id: ""
	I1208 01:56:00.536426 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.536462 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:00.536495 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:00.536582 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:00.570659 1136586 cri.go:89] found id: ""
	I1208 01:56:00.570746 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.570873 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:00.570915 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:00.571047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:00.600506 1136586 cri.go:89] found id: ""
	I1208 01:56:00.600542 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.600552 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:00.600559 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:00.600627 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:00.628998 1136586 cri.go:89] found id: ""
	I1208 01:56:00.629028 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.629037 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:00.629045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:00.629113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:00.655017 1136586 cri.go:89] found id: ""
	I1208 01:56:00.655055 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.655066 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:00.655073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:00.655136 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:00.687531 1136586 cri.go:89] found id: ""
	I1208 01:56:00.687555 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.687589 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:00.687601 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:00.687621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:00.716787 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:00.716826 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:00.773133 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:00.773171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:00.788167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:00.788194 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:00.851515 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:00.842694    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.843297    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.844838    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.845268    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.846892    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:00.851539 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:00.851553 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.378410 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:03.388811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:03.388882 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:03.416483 1136586 cri.go:89] found id: ""
	I1208 01:56:03.416508 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.416517 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:03.416523 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:03.416584 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:03.444854 1136586 cri.go:89] found id: ""
	I1208 01:56:03.444879 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.444889 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:03.444896 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:03.444957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:03.471069 1136586 cri.go:89] found id: ""
	I1208 01:56:03.471096 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.471106 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:03.471113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:03.471174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:03.497488 1136586 cri.go:89] found id: ""
	I1208 01:56:03.497516 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.497525 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:03.497532 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:03.497592 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:03.523459 1136586 cri.go:89] found id: ""
	I1208 01:56:03.523485 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.523494 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:03.523501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:03.523564 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:03.553004 1136586 cri.go:89] found id: ""
	I1208 01:56:03.553030 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.553038 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:03.553045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:03.553104 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:03.582299 1136586 cri.go:89] found id: ""
	I1208 01:56:03.582325 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.582334 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:03.582340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:03.582398 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:03.628970 1136586 cri.go:89] found id: ""
	I1208 01:56:03.629036 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.629057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:03.629078 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:03.629116 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:03.693550 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:03.693861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:03.725106 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:03.725132 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:03.797949 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:03.789559    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.790067    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.791636    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.792114    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.793692    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:03.797973 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:03.797985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.822975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:03.823012 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
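
The eight probes above all follow one pattern: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, and an empty result is what the harness then reports as `found id: ""` / `0 containers`. Below is a minimal Go sketch of the same probe, built only from the commands visible in the log; it is an illustration, not minikube's own cri.go code.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // --quiet prints one container ID per line, so empty output means
    // no container matches the name filter.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	ids := []string{}
    	for _, line := range strings.Split(string(out), "\n") {
    		if s := strings.TrimSpace(line); s != "" {
    			ids = append(ids, s)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Printf("probe for %q failed: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
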
	I1208 01:56:06.351834 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:06.362738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:06.362832 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:06.388195 1136586 cri.go:89] found id: ""
	I1208 01:56:06.388222 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.388231 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:06.388238 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:06.388305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:06.413430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.413536 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.413559 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:06.413580 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:06.413657 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:06.438706 1136586 cri.go:89] found id: ""
	I1208 01:56:06.438770 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.438794 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:06.438813 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:06.438893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:06.463796 1136586 cri.go:89] found id: ""
	I1208 01:56:06.463860 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.463883 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:06.463902 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:06.463979 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:06.493653 1136586 cri.go:89] found id: ""
	I1208 01:56:06.493719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.493743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:06.493761 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:06.493839 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:06.518393 1136586 cri.go:89] found id: ""
	I1208 01:56:06.518490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.518516 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:06.518540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:06.518628 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:06.547357 1136586 cri.go:89] found id: ""
	I1208 01:56:06.547423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.547444 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:06.547464 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:06.547537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:06.572430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.572460 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.572469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:06.572479 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:06.572520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:06.631771 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:06.631805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:06.648910 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:06.648992 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:06.719373 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:06.710549    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.711634    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.712364    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.713608    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.714264    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:06.719447 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:06.719474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:06.744508 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:06.744540 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
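
Reading the timestamps across cycles (01:56:03, :06, :09, ...), each pass begins with `sudo pgrep -xnf kube-apiserver.*minikube.*` and repeats on a roughly three-second cadence. The sketch below shows such a fixed-interval wait using pgrep's exit status as the readiness signal; the function names are illustrative and the interval is inferred from the log, so treat this as a model of the behavior rather than the harness's actual wait logic.

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverProcessRunning stands in for the probe in the log:
    // pgrep exits 0 only when a matching process exists.
    func apiserverProcessRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverProcessRunning() {
    			return nil
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence seen in the log
    	}
    	return errors.New("timed out waiting for kube-apiserver process")
    }

    func main() {
    	if err := waitForAPIServer(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
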
	I1208 01:56:09.275604 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:09.286432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:09.286521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:09.312708 1136586 cri.go:89] found id: ""
	I1208 01:56:09.312733 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.312742 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:09.312749 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:09.312809 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:09.341427 1136586 cri.go:89] found id: ""
	I1208 01:56:09.341452 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.341461 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:09.341468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:09.341533 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:09.364765 1136586 cri.go:89] found id: ""
	I1208 01:56:09.364791 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.364801 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:09.364808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:09.364871 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:09.390922 1136586 cri.go:89] found id: ""
	I1208 01:56:09.390950 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.390959 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:09.390965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:09.391027 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:09.415255 1136586 cri.go:89] found id: ""
	I1208 01:56:09.415279 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.415288 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:09.415294 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:09.415351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:09.443874 1136586 cri.go:89] found id: ""
	I1208 01:56:09.443898 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.443907 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:09.443913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:09.443973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:09.473821 1136586 cri.go:89] found id: ""
	I1208 01:56:09.473846 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.473855 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:09.473862 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:09.473920 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:09.502023 1136586 cri.go:89] found id: ""
	I1208 01:56:09.502048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.502057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:09.502066 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:09.502077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:09.557585 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:09.557621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:09.572644 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:09.572673 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:09.660866 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:09.652629    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.653404    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.654983    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.655317    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.656808    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:09.660889 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:09.660902 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:09.687200 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:09.687238 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
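
Every `kubectl describe nodes` attempt fails identically: the TCP connect to [::1]:8443 is refused, meaning nothing is listening on the apiserver port at all, as opposed to a TLS or authorization error further up the stack. That distinction can be confirmed without kubectl; a small diagnostic sketch:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// kubectl above dials https://localhost:8443; a raw TCP dial to the
    	// same address separates "port closed" from a higher-level failure.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("no listener on localhost:8443:", err) // expect "connection refused"
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }
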
	I1208 01:56:12.215648 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:12.227315 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:12.227391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:12.254343 1136586 cri.go:89] found id: ""
	I1208 01:56:12.254369 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.254378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:12.254385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:12.254467 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:12.279481 1136586 cri.go:89] found id: ""
	I1208 01:56:12.279550 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.279574 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:12.279594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:12.279683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:12.305844 1136586 cri.go:89] found id: ""
	I1208 01:56:12.305910 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.305933 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:12.305951 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:12.306041 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:12.330060 1136586 cri.go:89] found id: ""
	I1208 01:56:12.330139 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.330162 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:12.330181 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:12.330273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:12.356745 1136586 cri.go:89] found id: ""
	I1208 01:56:12.356813 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.356840 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:12.356858 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:12.356943 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:12.386368 1136586 cri.go:89] found id: ""
	I1208 01:56:12.386475 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.386492 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:12.386500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:12.386563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:12.412659 1136586 cri.go:89] found id: ""
	I1208 01:56:12.412685 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.412694 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:12.412700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:12.412779 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:12.440569 1136586 cri.go:89] found id: ""
	I1208 01:56:12.440596 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.440604 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:12.440615 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:12.440626 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:12.496637 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:12.496674 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:12.511594 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:12.511624 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:12.580748 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:12.580771 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:12.580784 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:12.613723 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:12.613802 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
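
Each "Gathering logs for ..." step shells out for the last 400 lines of one source: kubelet and containerd via journalctl, the kernel ring buffer via dmesg, and container state via crictl. A condensed sketch that runs the same four commands, copied verbatim from the log lines above, through bash as the harness does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one collection command through bash and prints whatever
    // comes back, errors included, since this is diagnostic output.
    func gather(label, script string) {
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
    }

    func main() {
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("containerd", "sudo journalctl -u containerd -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
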
	I1208 01:56:15.152673 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:15.163614 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:15.163688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:15.192414 1136586 cri.go:89] found id: ""
	I1208 01:56:15.192449 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.192458 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:15.192465 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:15.192537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:15.219157 1136586 cri.go:89] found id: ""
	I1208 01:56:15.219182 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.219191 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:15.219198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:15.219258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:15.244756 1136586 cri.go:89] found id: ""
	I1208 01:56:15.244824 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.244839 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:15.244846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:15.244907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:15.271473 1136586 cri.go:89] found id: ""
	I1208 01:56:15.271546 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.271562 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:15.271569 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:15.271637 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:15.297385 1136586 cri.go:89] found id: ""
	I1208 01:56:15.297411 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.297430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:15.297437 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:15.297506 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:15.323057 1136586 cri.go:89] found id: ""
	I1208 01:56:15.323127 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.323149 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:15.323158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:15.323226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:15.348696 1136586 cri.go:89] found id: ""
	I1208 01:56:15.348771 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.348788 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:15.348795 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:15.348857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:15.373461 1136586 cri.go:89] found id: ""
	I1208 01:56:15.373483 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.373491 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:15.373500 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:15.373512 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:15.403816 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:15.403845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:15.463833 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:15.463875 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:15.479494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:15.479522 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:15.551161 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:15.551185 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:15.551199 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
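
The container-status command is written defensively: `which crictl || echo crictl` substitutes the bare name if crictl is not on PATH (so the pipeline still composes), and the trailing `|| sudo docker ps -a` falls back to Docker if the crictl invocation fails for any reason. The same double fallback expressed directly in Go, as a sketch rather than the harness's code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tool := "crictl" // bare name, as `echo crictl` would supply
    	if path, err := exec.LookPath("crictl"); err == nil {
    		tool = path // resolved path, as `which crictl` would supply
    	}
    	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or failing: fall back to Docker,
    		// mirroring the `|| sudo docker ps -a` branch.
    		out, _ = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	}
    	fmt.Print(string(out))
    }
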
	I1208 01:56:18.077116 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:18.087881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:18.087956 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:18.116452 1136586 cri.go:89] found id: ""
	I1208 01:56:18.116480 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.116490 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:18.116497 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:18.116558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:18.147311 1136586 cri.go:89] found id: ""
	I1208 01:56:18.147339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.147347 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:18.147353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:18.147415 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:18.173654 1136586 cri.go:89] found id: ""
	I1208 01:56:18.173680 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.173689 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:18.173695 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:18.173754 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:18.198118 1136586 cri.go:89] found id: ""
	I1208 01:56:18.198142 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.198151 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:18.198158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:18.198220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:18.229347 1136586 cri.go:89] found id: ""
	I1208 01:56:18.229371 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.229379 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:18.229385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:18.229443 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:18.253505 1136586 cri.go:89] found id: ""
	I1208 01:56:18.253528 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.253536 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:18.253542 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:18.253601 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:18.279471 1136586 cri.go:89] found id: ""
	I1208 01:56:18.279496 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.279506 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:18.279513 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:18.279571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:18.309796 1136586 cri.go:89] found id: ""
	I1208 01:56:18.309819 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.309827 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:18.309839 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:18.309850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:18.366744 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:18.366779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:18.381719 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:18.381749 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:18.448045 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:18.448070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:18.448082 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.473293 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:18.473332 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.004404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:21.017333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:21.017424 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:21.042756 1136586 cri.go:89] found id: ""
	I1208 01:56:21.042823 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.042839 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:21.042847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:21.042907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:21.068017 1136586 cri.go:89] found id: ""
	I1208 01:56:21.068042 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.068051 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:21.068057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:21.068134 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:21.095695 1136586 cri.go:89] found id: ""
	I1208 01:56:21.095719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.095729 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:21.095735 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:21.095833 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:21.126473 1136586 cri.go:89] found id: ""
	I1208 01:56:21.126499 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.126508 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:21.126515 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:21.126578 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:21.159320 1136586 cri.go:89] found id: ""
	I1208 01:56:21.159344 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.159354 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:21.159360 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:21.159421 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:21.189716 1136586 cri.go:89] found id: ""
	I1208 01:56:21.189740 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.189790 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:21.189808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:21.189875 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:21.215065 1136586 cri.go:89] found id: ""
	I1208 01:56:21.215090 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.215099 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:21.215105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:21.215186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:21.239527 1136586 cri.go:89] found id: ""
	I1208 01:56:21.239551 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.239559 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:21.239568 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:21.239581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:21.303585 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:21.303607 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:21.303622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:21.329232 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:21.329269 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.357399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:21.357429 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:21.413905 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:21.413941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:23.930606 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:23.941524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:23.941609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:23.969400 1136586 cri.go:89] found id: ""
	I1208 01:56:23.969431 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.969441 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:23.969447 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:23.969510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:23.999105 1136586 cri.go:89] found id: ""
	I1208 01:56:23.999131 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.999140 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:23.999147 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:23.999216 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:24.031489 1136586 cri.go:89] found id: ""
	I1208 01:56:24.031517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.031527 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:24.031533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:24.031598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:24.057876 1136586 cri.go:89] found id: ""
	I1208 01:56:24.057902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.057911 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:24.057917 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:24.057978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:24.092220 1136586 cri.go:89] found id: ""
	I1208 01:56:24.092247 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.092257 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:24.092263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:24.092324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:24.125261 1136586 cri.go:89] found id: ""
	I1208 01:56:24.125289 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.125298 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:24.125306 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:24.125367 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:24.153744 1136586 cri.go:89] found id: ""
	I1208 01:56:24.153772 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.153782 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:24.153789 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:24.153852 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:24.179839 1136586 cri.go:89] found id: ""
	I1208 01:56:24.179866 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.179875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:24.179884 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:24.179916 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:24.237479 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:24.237514 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:24.252654 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:24.252693 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:24.325211 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:24.325232 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:24.325244 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:24.351049 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:24.351084 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
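
kubectl targets https://localhost:8443 here because that is the server recorded in /var/lib/minikube/kubeconfig, which every describe-nodes attempt passes via --kubeconfig. A tiny sketch that pulls the server line out of that file without a YAML dependency, assuming the standard kubeconfig layout:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "server:") {
    			fmt.Println(line) // e.g. "server: https://localhost:8443"
    		}
    	}
    }
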
	I1208 01:56:26.879645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:26.891936 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:26.892009 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:26.916974 1136586 cri.go:89] found id: ""
	I1208 01:56:26.916998 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.917007 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:26.917013 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:26.917072 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:26.941861 1136586 cri.go:89] found id: ""
	I1208 01:56:26.941885 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.941894 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:26.941900 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:26.941963 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:26.974560 1136586 cri.go:89] found id: ""
	I1208 01:56:26.974587 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.974596 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:26.974602 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:26.974663 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:26.999892 1136586 cri.go:89] found id: ""
	I1208 01:56:26.999921 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.999930 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:26.999937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:27.000021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:27.030397 1136586 cri.go:89] found id: ""
	I1208 01:56:27.030421 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.030430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:27.030436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:27.030521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:27.059896 1136586 cri.go:89] found id: ""
	I1208 01:56:27.059923 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.059932 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:27.059941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:27.059999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:27.084629 1136586 cri.go:89] found id: ""
	I1208 01:56:27.084656 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.084665 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:27.084671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:27.084733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:27.119162 1136586 cri.go:89] found id: ""
	I1208 01:56:27.119185 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.119193 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:27.119202 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:27.119213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:27.179450 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:27.179487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:27.194459 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:27.194486 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:27.261775 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:27.261797 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:27.261810 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:27.287303 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:27.287338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
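The cycle above is minikube probing the containerd runtime for each expected control-plane container by name and finding nothing, which is why every "0 containers" line follows a crictl call. A minimal standalone sketch of the same probe follows, assuming crictl is installed and configured for containerd on the node; the flags are copied from the log, but the loop itself is an illustration, not minikube's code:

    # Hypothetical re-run of the probe loop seen in the log (assumes crictl is configured).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")   # all states, IDs only, filtered by name
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        printf '%s -> %s\n' "$name" "$ids"
      fi
    done

On this node every name comes back empty, so minikube falls through to the log-gathering steps that follow.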
	I1208 01:56:29.820302 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:29.830851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:29.830917 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:29.869685 1136586 cri.go:89] found id: ""
	I1208 01:56:29.869717 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.869726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:29.869733 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:29.869789 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:29.904021 1136586 cri.go:89] found id: ""
	I1208 01:56:29.904048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.904057 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:29.904063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:29.904122 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:29.929826 1136586 cri.go:89] found id: ""
	I1208 01:56:29.929854 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.929864 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:29.929870 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:29.929935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:29.954915 1136586 cri.go:89] found id: ""
	I1208 01:56:29.954939 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.954947 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:29.954954 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:29.955013 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:29.980194 1136586 cri.go:89] found id: ""
	I1208 01:56:29.980218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.980227 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:29.980233 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:29.980296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:30.034520 1136586 cri.go:89] found id: ""
	I1208 01:56:30.034556 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.034566 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:30.034573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:30.034648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:30.069395 1136586 cri.go:89] found id: ""
	I1208 01:56:30.069422 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.069432 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:30.069439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:30.069507 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:30.109430 1136586 cri.go:89] found id: ""
	I1208 01:56:30.109459 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.109469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:30.109479 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:30.109491 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:30.146595 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:30.146631 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:30.206376 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:30.206419 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:30.225510 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:30.225621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:30.296464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:30.287753    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.288259    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290021    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290422    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.291920    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:30.287753    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.288259    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290021    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290422    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.291920    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:30.296484 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:30.296497 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
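Each "describe nodes" attempt fails identically: kubectl reads /var/lib/minikube/kubeconfig, dials https://localhost:8443, and gets connection refused, meaning no process is listening on the apiserver port at all. That is consistent with the empty crictl listings above, and points away from a credentials or kubeconfig problem. A quick reachability check is sketched below, assuming curl is available inside the node (an assumption; the log does not show it):

    # Hypothetical check: a refused TCP connection here reproduces the kubectl error above.
    curl -k --max-time 5 https://localhost:8443/healthz \
      || echo "apiserver not listening on 8443"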
	I1208 01:56:32.823121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:32.833454 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:32.833529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:32.868695 1136586 cri.go:89] found id: ""
	I1208 01:56:32.868721 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.868740 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:32.868747 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:32.868821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:32.906232 1136586 cri.go:89] found id: ""
	I1208 01:56:32.906253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.906261 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:32.906267 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:32.906327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:32.932154 1136586 cri.go:89] found id: ""
	I1208 01:56:32.932181 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.932190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:32.932200 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:32.932262 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:32.957782 1136586 cri.go:89] found id: ""
	I1208 01:56:32.957805 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.957814 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:32.957821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:32.957886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:32.983951 1136586 cri.go:89] found id: ""
	I1208 01:56:32.983978 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.983988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:32.983995 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:32.984057 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:33.011290 1136586 cri.go:89] found id: ""
	I1208 01:56:33.011316 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.011325 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:33.011340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:33.011410 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:33.038703 1136586 cri.go:89] found id: ""
	I1208 01:56:33.038726 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.038735 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:33.038741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:33.038799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:33.063041 1136586 cri.go:89] found id: ""
	I1208 01:56:33.063065 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.063074 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:33.063084 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:33.063115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:33.078006 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:33.078036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:33.170567 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:33.170591 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:33.170607 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:33.196077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:33.196111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:33.227121 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:33.227152 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:35.783290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:35.793700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:35.793778 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:35.821903 1136586 cri.go:89] found id: ""
	I1208 01:56:35.821937 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.821946 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:35.821953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:35.822014 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:35.854878 1136586 cri.go:89] found id: ""
	I1208 01:56:35.854902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.854910 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:35.854916 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:35.854978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:35.881395 1136586 cri.go:89] found id: ""
	I1208 01:56:35.881418 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.881426 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:35.881432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:35.881490 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:35.910658 1136586 cri.go:89] found id: ""
	I1208 01:56:35.910679 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.910688 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:35.910694 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:35.910753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:35.939089 1136586 cri.go:89] found id: ""
	I1208 01:56:35.939114 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.939129 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:35.939137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:35.939199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:35.964135 1136586 cri.go:89] found id: ""
	I1208 01:56:35.964158 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.964166 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:35.964173 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:35.964235 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:35.990669 1136586 cri.go:89] found id: ""
	I1208 01:56:35.990692 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.990701 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:35.990707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:35.990770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:36.020165 1136586 cri.go:89] found id: ""
	I1208 01:56:36.020191 1136586 logs.go:282] 0 containers: []
	W1208 01:56:36.020207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:36.020217 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:36.020228 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:36.076411 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:36.076452 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:36.093602 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:36.093683 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:36.181516 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:36.181540 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:36.181552 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:36.207107 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:36.207142 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
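The "container status" step relies on a shell fallback chain rather than assuming a particular runtime CLI. The command below is the one from the log, with comments added to spell out the idiom:

    # Same command as the log's "container status" step, annotated:
    # `which crictl || echo crictl` yields the full crictl path when installed, or the
    # bare word "crictl" (which fails to run) so the docker branch executes instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a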
	I1208 01:56:38.735690 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:38.746691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:38.746767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:38.773309 1136586 cri.go:89] found id: ""
	I1208 01:56:38.773339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.773349 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:38.773356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:38.773423 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:38.801208 1136586 cri.go:89] found id: ""
	I1208 01:56:38.801235 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.801245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:38.801254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:38.801317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:38.826539 1136586 cri.go:89] found id: ""
	I1208 01:56:38.826566 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.826575 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:38.826582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:38.826642 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:38.865488 1136586 cri.go:89] found id: ""
	I1208 01:56:38.865517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.865527 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:38.865533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:38.865594 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:38.900627 1136586 cri.go:89] found id: ""
	I1208 01:56:38.900655 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.900664 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:38.900670 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:38.900733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:38.927847 1136586 cri.go:89] found id: ""
	I1208 01:56:38.927871 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.927880 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:38.927887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:38.927949 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:38.952594 1136586 cri.go:89] found id: ""
	I1208 01:56:38.952666 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.952689 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:38.952714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:38.952803 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:38.978089 1136586 cri.go:89] found id: ""
	I1208 01:56:38.978116 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.978125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:38.978134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:38.978147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:39.047378 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:39.047401 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:39.047414 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:39.073359 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:39.073402 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:39.112761 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:39.112796 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:39.176177 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:39.176214 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:41.692238 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:41.702585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:41.702656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:41.726879 1136586 cri.go:89] found id: ""
	I1208 01:56:41.726913 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.726923 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:41.726930 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:41.726996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:41.752119 1136586 cri.go:89] found id: ""
	I1208 01:56:41.752143 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.752152 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:41.752158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:41.752215 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:41.777446 1136586 cri.go:89] found id: ""
	I1208 01:56:41.777473 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.777482 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:41.777488 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:41.777548 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:41.804077 1136586 cri.go:89] found id: ""
	I1208 01:56:41.804103 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.804112 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:41.804119 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:41.804179 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:41.828883 1136586 cri.go:89] found id: ""
	I1208 01:56:41.828908 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.828917 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:41.828924 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:41.828987 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:41.875100 1136586 cri.go:89] found id: ""
	I1208 01:56:41.875128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.875138 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:41.875145 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:41.875204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:41.907099 1136586 cri.go:89] found id: ""
	I1208 01:56:41.907126 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.907136 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:41.907142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:41.907201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:41.936702 1136586 cri.go:89] found id: ""
	I1208 01:56:41.936729 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.936738 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:41.936748 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:41.936780 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:41.992993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:41.993029 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:42.008895 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:42.008988 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:42.090561 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:42.090592 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:42.090605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:42.127950 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:42.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
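The timestamps show this probe-and-gather cycle repeating on a roughly three-second cadence (01:56:26, :29, :32, :35, :38, :41, :44, ...), which reads like a polling wait on the apiserver process. A sketch of an equivalent deadline loop follows; the pgrep pattern is taken from the log, but the sleep interval and time budget are assumptions, since minikube's actual wait parameters are not visible in this excerpt:

    # Hypothetical wait loop matching the ~3s cadence in the log.
    deadline=$(( $(date +%s) + 300 ))     # assumed 5-minute budget, not from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done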
	I1208 01:56:44.678288 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:44.690356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:44.690429 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:44.716072 1136586 cri.go:89] found id: ""
	I1208 01:56:44.716095 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.716105 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:44.716111 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:44.716173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:44.742318 1136586 cri.go:89] found id: ""
	I1208 01:56:44.742347 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.742357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:44.742363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:44.742428 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:44.768786 1136586 cri.go:89] found id: ""
	I1208 01:56:44.768814 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.768824 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:44.768830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:44.768892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:44.794997 1136586 cri.go:89] found id: ""
	I1208 01:56:44.795020 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.795028 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:44.795035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:44.795093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:44.824626 1136586 cri.go:89] found id: ""
	I1208 01:56:44.824693 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.824719 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:44.824738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:44.824823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:44.854631 1136586 cri.go:89] found id: ""
	I1208 01:56:44.854660 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.854682 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:44.854707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:44.854790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:44.886832 1136586 cri.go:89] found id: ""
	I1208 01:56:44.886853 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.886862 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:44.886868 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:44.886931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:44.918383 1136586 cri.go:89] found id: ""
	I1208 01:56:44.918409 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.918420 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:44.918430 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:44.918441 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:44.974124 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:44.974160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:44.989499 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:44.989581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:45.183353 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:45.183384 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:45.183415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:45.225041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:45.225130 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:47.776374 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:47.786874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:47.786944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:47.817071 1136586 cri.go:89] found id: ""
	I1208 01:56:47.817097 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.817106 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:47.817113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:47.817173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:47.848935 1136586 cri.go:89] found id: ""
	I1208 01:56:47.848964 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.848972 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:47.848978 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:47.849039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:47.879145 1136586 cri.go:89] found id: ""
	I1208 01:56:47.879175 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.879190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:47.879196 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:47.879255 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:47.919571 1136586 cri.go:89] found id: ""
	I1208 01:56:47.919595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.919605 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:47.919612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:47.919678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:47.945072 1136586 cri.go:89] found id: ""
	I1208 01:56:47.945098 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.945107 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:47.945113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:47.945176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:47.972399 1136586 cri.go:89] found id: ""
	I1208 01:56:47.972423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.972432 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:47.972446 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:47.972513 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:47.998198 1136586 cri.go:89] found id: ""
	I1208 01:56:47.998225 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.998234 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:47.998240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:47.998357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:48.026417 1136586 cri.go:89] found id: ""
	I1208 01:56:48.026469 1136586 logs.go:282] 0 containers: []
	W1208 01:56:48.026480 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:48.026514 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:48.026534 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:48.083726 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:48.083765 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:48.102473 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:48.102503 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:48.195413 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:48.195448 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:48.195461 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:48.222088 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:48.222125 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:50.752185 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.763217 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:50.763296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:50.792851 1136586 cri.go:89] found id: ""
	I1208 01:56:50.792877 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.792886 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:50.792893 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:50.792952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:50.818544 1136586 cri.go:89] found id: ""
	I1208 01:56:50.818573 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.818582 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:50.818590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:50.818653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:50.856256 1136586 cri.go:89] found id: ""
	I1208 01:56:50.856286 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.856296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:50.856303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:50.856365 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:50.890254 1136586 cri.go:89] found id: ""
	I1208 01:56:50.890277 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.890286 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:50.890292 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:50.890351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:50.919013 1136586 cri.go:89] found id: ""
	I1208 01:56:50.919039 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.919048 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:50.919054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:50.919115 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:50.943865 1136586 cri.go:89] found id: ""
	I1208 01:56:50.943888 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.943897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:50.943903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:50.943968 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:50.967885 1136586 cri.go:89] found id: ""
	I1208 01:56:50.967912 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.967921 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:50.967927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:50.967984 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:50.997744 1136586 cri.go:89] found id: ""
	I1208 01:56:50.997779 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.997788 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
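Each poll repeats the same per-component query: ask the CRI runtime for any container, running or exited, whose name matches one control-plane component, and log the miss when the ID list comes back empty. The whole enumeration condenses to a loop like this sketch (the component list is exactly the one queried above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done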
	I1208 01:56:50.997854 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:50.997874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:51.066108 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:51.066131 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:51.066144 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:51.092098 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:51.092134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:51.129363 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:51.129392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:51.192049 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:51.192086 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
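The gathering pass collects the same four sources on every iteration: the kubelet and containerd journals, filtered kernel messages, and the container status listing. Run inside the node (for example via minikube ssh), a sketch capturing the same evidence to files would be:

    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log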
	I1208 01:56:53.707235 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.718177 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:53.718245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:53.743649 1136586 cri.go:89] found id: ""
	I1208 01:56:53.743674 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.743684 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:53.743690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:53.743755 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:53.769475 1136586 cri.go:89] found id: ""
	I1208 01:56:53.769503 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.769512 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:53.769519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:53.769581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:53.795104 1136586 cri.go:89] found id: ""
	I1208 01:56:53.795128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.795137 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:53.795143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:53.795219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:53.824300 1136586 cri.go:89] found id: ""
	I1208 01:56:53.824322 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.824335 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:53.824342 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:53.824403 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:53.858957 1136586 cri.go:89] found id: ""
	I1208 01:56:53.858984 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.858993 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:53.858999 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:53.859059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:53.889936 1136586 cri.go:89] found id: ""
	I1208 01:56:53.889958 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.889967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:53.889974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:53.890042 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:53.917197 1136586 cri.go:89] found id: ""
	I1208 01:56:53.917221 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.917230 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:53.917236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:53.917301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:53.944246 1136586 cri.go:89] found id: ""
	I1208 01:56:53.944313 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.944340 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:53.944364 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:53.944395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:54.000224 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:54.000263 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:54.018576 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:54.018610 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:54.091957 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:54.092037 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:54.092064 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:54.121226 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:54.121262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.665113 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.675727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:56.675793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:56.702486 1136586 cri.go:89] found id: ""
	I1208 01:56:56.702512 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.702521 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:56.702536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:56.702595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:56.727464 1136586 cri.go:89] found id: ""
	I1208 01:56:56.727490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.727499 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:56.727506 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:56.727574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:56.755210 1136586 cri.go:89] found id: ""
	I1208 01:56:56.755242 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.755252 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:56.755259 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:56.755317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:56.780366 1136586 cri.go:89] found id: ""
	I1208 01:56:56.780394 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.780403 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:56.780409 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:56.780502 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:56.805514 1136586 cri.go:89] found id: ""
	I1208 01:56:56.805541 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.805551 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:56.805557 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:56.805615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:56.830960 1136586 cri.go:89] found id: ""
	I1208 01:56:56.830985 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.830994 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:56.831001 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:56.831067 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:56.877742 1136586 cri.go:89] found id: ""
	I1208 01:56:56.877812 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.877847 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:56.877873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:56.877969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:56.909088 1136586 cri.go:89] found id: ""
	I1208 01:56:56.909173 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.909197 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:56.909218 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:56.909261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:56.937087 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:56.937122 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.964566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:56.964593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:57.025871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:57.025917 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:57.041167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:57.041200 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:57.113620 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:59.615300 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.625998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:59.626071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:59.651013 1136586 cri.go:89] found id: ""
	I1208 01:56:59.651040 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.651050 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:59.651058 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:59.651140 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:59.676526 1136586 cri.go:89] found id: ""
	I1208 01:56:59.676595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.676619 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:59.676632 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:59.676706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:59.705956 1136586 cri.go:89] found id: ""
	I1208 01:56:59.705982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.705992 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:59.705998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:59.706058 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:59.732960 1136586 cri.go:89] found id: ""
	I1208 01:56:59.732988 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.732998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:59.733004 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:59.733064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:59.761227 1136586 cri.go:89] found id: ""
	I1208 01:56:59.761253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.761262 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:59.761268 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:59.761332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:59.795189 1136586 cri.go:89] found id: ""
	I1208 01:56:59.795218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.795227 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:59.795235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:59.795296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:59.820209 1136586 cri.go:89] found id: ""
	I1208 01:56:59.820278 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.820303 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:59.820317 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:59.820397 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:59.854906 1136586 cri.go:89] found id: ""
	I1208 01:56:59.854982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.855003 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:59.855031 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:59.855075 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:59.895804 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:59.895880 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:59.953038 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:59.953076 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:59.968348 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:59.968383 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:00.183275 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:00.183303 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:00.183318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
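These cycles repeat on a roughly three-second cadence, each one starting with the same pgrep probe for a kube-apiserver process. An equivalent wait loop with an explicit deadline looks like the sketch below (the 120-second budget is an assumption for illustration, not a value taken from this report):

    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "kube-apiserver never came up" >&2
            exit 1
        fi
        sleep 3
    done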
	I1208 01:57:02.767941 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.778692 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:02.778767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:02.804099 1136586 cri.go:89] found id: ""
	I1208 01:57:02.804168 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.804192 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:02.804207 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:02.804282 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:02.829415 1136586 cri.go:89] found id: ""
	I1208 01:57:02.829442 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.829451 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:02.829456 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:02.829516 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:02.876418 1136586 cri.go:89] found id: ""
	I1208 01:57:02.876448 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.876456 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:02.876462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:02.876521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:02.908999 1136586 cri.go:89] found id: ""
	I1208 01:57:02.909021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.909030 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:02.909036 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:02.909095 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:02.935740 1136586 cri.go:89] found id: ""
	I1208 01:57:02.935763 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.935772 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:02.935781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:02.935845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:02.962615 1136586 cri.go:89] found id: ""
	I1208 01:57:02.962640 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.962649 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:02.962676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:02.962762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:02.988338 1136586 cri.go:89] found id: ""
	I1208 01:57:02.988413 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.988447 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:02.988469 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:02.988563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:03.016087 1136586 cri.go:89] found id: ""
	I1208 01:57:03.016115 1136586 logs.go:282] 0 containers: []
	W1208 01:57:03.016125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:03.016135 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:03.016147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:03.045768 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:03.045798 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:03.103820 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:03.103856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:03.119506 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:03.119544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:03.188553 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:03.188577 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:03.188591 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.714622 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.728070 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:05.728144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:05.752683 1136586 cri.go:89] found id: ""
	I1208 01:57:05.752709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.752718 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:05.752725 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:05.752804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:05.777888 1136586 cri.go:89] found id: ""
	I1208 01:57:05.777926 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.777935 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:05.777941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:05.778004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:05.803200 1136586 cri.go:89] found id: ""
	I1208 01:57:05.803227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.803236 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:05.803243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:05.803305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:05.828694 1136586 cri.go:89] found id: ""
	I1208 01:57:05.828719 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.828728 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:05.828734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:05.828795 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:05.871706 1136586 cri.go:89] found id: ""
	I1208 01:57:05.871734 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.871743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:05.871750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:05.871810 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:05.910109 1136586 cri.go:89] found id: ""
	I1208 01:57:05.910130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.910139 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:05.910146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:05.910211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:05.935420 1136586 cri.go:89] found id: ""
	I1208 01:57:05.935446 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.935455 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:05.935463 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:05.935524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:05.964805 1136586 cri.go:89] found id: ""
	I1208 01:57:05.964830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.964840 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:05.964850 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:05.964861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.991812 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:05.991850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:06.023289 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:06.023318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:06.079947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:06.079984 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:06.094973 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:06.095001 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:06.164494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:08.664783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.675873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:08.675951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:08.701544 1136586 cri.go:89] found id: ""
	I1208 01:57:08.701570 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.701579 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:08.701585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:08.701644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:08.726739 1136586 cri.go:89] found id: ""
	I1208 01:57:08.726761 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.726770 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:08.726777 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:08.726834 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:08.752551 1136586 cri.go:89] found id: ""
	I1208 01:57:08.752579 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.752590 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:08.752596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:08.752661 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:08.785394 1136586 cri.go:89] found id: ""
	I1208 01:57:08.785418 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.785427 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:08.785434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:08.785494 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:08.809379 1136586 cri.go:89] found id: ""
	I1208 01:57:08.809411 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.809420 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:08.809426 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:08.809493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:08.834793 1136586 cri.go:89] found id: ""
	I1208 01:57:08.834820 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.834829 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:08.834836 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:08.834895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:08.871040 1136586 cri.go:89] found id: ""
	I1208 01:57:08.871067 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.871077 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:08.871083 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:08.871149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:08.898916 1136586 cri.go:89] found id: ""
	I1208 01:57:08.898943 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.898953 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:08.898961 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:08.898973 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:08.958751 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:08.958791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:08.975804 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:08.975842 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:09.045728 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:09.045754 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:09.045768 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:09.071802 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:09.071844 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:11.602631 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.621366 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:11.621447 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:11.654343 1136586 cri.go:89] found id: ""
	I1208 01:57:11.654378 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.654387 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:11.654396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:11.654496 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:11.687384 1136586 cri.go:89] found id: ""
	I1208 01:57:11.687421 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.687431 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:11.687444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:11.687515 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:11.716671 1136586 cri.go:89] found id: ""
	I1208 01:57:11.716709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.716720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:11.716726 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:11.716796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:11.742357 1136586 cri.go:89] found id: ""
	I1208 01:57:11.742391 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.742400 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:11.742407 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:11.742493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:11.768963 1136586 cri.go:89] found id: ""
	I1208 01:57:11.768990 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.768999 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:11.769006 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:11.769075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:11.793322 1136586 cri.go:89] found id: ""
	I1208 01:57:11.793354 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.793364 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:11.793371 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:11.793438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:11.819428 1136586 cri.go:89] found id: ""
	I1208 01:57:11.819473 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.819483 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:11.819490 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:11.819561 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:11.856579 1136586 cri.go:89] found id: ""
	I1208 01:57:11.856620 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.856629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:11.856639 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:11.856650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:11.920066 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:11.920104 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:11.936490 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:11.936579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:12.003301 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:12.003353 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:12.003368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:12.034123 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:12.034162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
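
The cycle above shows the log collector enumerating the expected control-plane containers one name at a time with crictl ps -a --quiet --name=<name>; an empty result for every single name means no Kubernetes containers exist in the containerd k8s.io namespace at all. A minimal Go sketch of the same enumeration (hypothetical helper, not minikube's actual code; assumes crictl on PATH and passwordless sudo):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Expected control-plane container names, mirroring the names probed in the log above.
    var names = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    func main() {
        for _, name := range names {
            // crictl ps -a --quiet prints one container ID per line, or nothing at all.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
                continue
            }
            fmt.Printf("%q -> %d container(s): %v\n", name, len(ids), ids)
        }
    }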
	I1208 01:57:14.566675 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.577850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:14.577926 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:14.614645 1136586 cri.go:89] found id: ""
	I1208 01:57:14.614674 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.614683 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:14.614689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:14.614746 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:14.653668 1136586 cri.go:89] found id: ""
	I1208 01:57:14.653689 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.653698 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:14.653704 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:14.653760 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:14.683123 1136586 cri.go:89] found id: ""
	I1208 01:57:14.683147 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.683155 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:14.683162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:14.683220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:14.712290 1136586 cri.go:89] found id: ""
	I1208 01:57:14.712317 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.712326 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:14.712333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:14.712411 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:14.741728 1136586 cri.go:89] found id: ""
	I1208 01:57:14.741752 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.741761 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:14.741768 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:14.741830 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:14.766640 1136586 cri.go:89] found id: ""
	I1208 01:57:14.766675 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.766684 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:14.766690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:14.766749 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:14.795809 1136586 cri.go:89] found id: ""
	I1208 01:57:14.795833 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.795843 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:14.795850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:14.795908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:14.824523 1136586 cri.go:89] found id: ""
	I1208 01:57:14.824546 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.824555 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:14.824564 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:14.824579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:14.883992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:14.884032 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:14.899927 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:14.899958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:14.971584 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:14.971605 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:14.971618 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:14.997478 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:14.997516 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
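
Every kubectl describe nodes attempt above fails with dial tcp [::1]:8443: connect: connection refused, which means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure. A raw TCP dial distinguishes those cases quickly; a minimal sketch (port 8443 taken from the log, everything else illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // A refused dial returns almost immediately with "connection refused";
        // a firewalled or dead host would instead hit the timeout.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }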
	I1208 01:57:17.562433 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.573169 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:17.573243 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:17.604838 1136586 cri.go:89] found id: ""
	I1208 01:57:17.604866 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.604879 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:17.604885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:17.604945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:17.651166 1136586 cri.go:89] found id: ""
	I1208 01:57:17.651193 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.651202 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:17.651208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:17.651275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:17.679266 1136586 cri.go:89] found id: ""
	I1208 01:57:17.679302 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.679312 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:17.679318 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:17.679379 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:17.703476 1136586 cri.go:89] found id: ""
	I1208 01:57:17.703504 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.703513 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:17.703519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:17.703579 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:17.732349 1136586 cri.go:89] found id: ""
	I1208 01:57:17.732377 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.732386 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:17.732393 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:17.732461 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:17.761008 1136586 cri.go:89] found id: ""
	I1208 01:57:17.761033 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.761042 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:17.761053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:17.761112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:17.789502 1136586 cri.go:89] found id: ""
	I1208 01:57:17.789527 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.789536 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:17.789543 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:17.789599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:17.814915 1136586 cri.go:89] found id: ""
	I1208 01:57:17.814938 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.814947 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:17.814958 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:17.814971 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:17.901464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:17.901483 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:17.901496 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:17.927699 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:17.927737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:17.956480 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:17.956506 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:18.016061 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:18.016103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
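
Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*, i.e. the collector first checks whether an apiserver process matching the profile exists before listing containers. A hedged sketch of that process check (assumes standard pgrep semantics: exit status 0 on a match, 1 on none):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether a kube-apiserver process matching the
    // minikube profile exists; pgrep -f matches against the full command line.
    func apiserverRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        if apiserverRunning() {
            fmt.Println("kube-apiserver process found")
        } else {
            fmt.Println("kube-apiserver not running; falling back to log collection")
        }
    }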
	I1208 01:57:20.532462 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.543127 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:20.543203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:20.568124 1136586 cri.go:89] found id: ""
	I1208 01:57:20.568149 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.568158 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:20.568167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:20.568227 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:20.603985 1136586 cri.go:89] found id: ""
	I1208 01:57:20.604021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.604030 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:20.604037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:20.604106 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:20.636556 1136586 cri.go:89] found id: ""
	I1208 01:57:20.636588 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.636597 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:20.636603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:20.636671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:20.672751 1136586 cri.go:89] found id: ""
	I1208 01:57:20.672825 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.672860 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:20.672885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:20.672980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:20.701486 1136586 cri.go:89] found id: ""
	I1208 01:57:20.701557 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.701593 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:20.701617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:20.701708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:20.727838 1136586 cri.go:89] found id: ""
	I1208 01:57:20.727863 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.727873 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:20.727897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:20.727958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:20.757101 1136586 cri.go:89] found id: ""
	I1208 01:57:20.757126 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.757135 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:20.757142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:20.757204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:20.786936 1136586 cri.go:89] found id: ""
	I1208 01:57:20.786961 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.786970 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:20.786981 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:20.786995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:20.801478 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:20.801508 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:20.873983 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:20.874054 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:20.874087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:20.901450 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:20.901529 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:20.934263 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:20.934288 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
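
With no containers found, the collector falls back to host-level sources: the kubelet and containerd units via journalctl, plus the kernel ring buffer via dmesg filtered to warning level and above. A minimal sketch of that fan-out, assuming a systemd host (the command strings are copied from the log; the Go wrapper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same host-level log sources the collector queries above.
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("gathering %s failed: %v\n", name, err)
                continue
            }
            fmt.Printf("== %s (%d bytes) ==\n", name, len(out))
        }
    }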
	I1208 01:57:23.489851 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.500424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:23.500500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:23.526190 1136586 cri.go:89] found id: ""
	I1208 01:57:23.526216 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.526225 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:23.526232 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:23.526294 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:23.552764 1136586 cri.go:89] found id: ""
	I1208 01:57:23.552790 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.552799 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:23.552806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:23.552868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:23.577380 1136586 cri.go:89] found id: ""
	I1208 01:57:23.577406 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.577414 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:23.577421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:23.577481 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:23.608802 1136586 cri.go:89] found id: ""
	I1208 01:57:23.608830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.608839 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:23.608846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:23.608910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:23.634994 1136586 cri.go:89] found id: ""
	I1208 01:57:23.635020 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.635029 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:23.635035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:23.635096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:23.663236 1136586 cri.go:89] found id: ""
	I1208 01:57:23.663261 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.663270 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:23.663277 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:23.663350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:23.688872 1136586 cri.go:89] found id: ""
	I1208 01:57:23.688898 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.688907 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:23.688914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:23.688973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:23.714286 1136586 cri.go:89] found id: ""
	I1208 01:57:23.714312 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.714320 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:23.714329 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:23.714345 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:23.742945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:23.742972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.798260 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:23.798300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:23.813312 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:23.813340 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:23.892723 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:23.892748 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:23.892762 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
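
The container-status step runs the shell fallback sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl wherever it resolves, try the bare name otherwise, and only then fall back to docker. The same preference order in Go might look like this (a sketch; exec.LookPath stands in for the which trick):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl if it resolves on PATH; fall back to docker only if
        // crictl is absent or its invocation fails.
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
                return
            }
        }
        out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("neither crictl nor docker produced a listing:", err)
            return
        }
        fmt.Print(string(out))
    }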
	I1208 01:57:26.422664 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.433380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:26.433455 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:26.465015 1136586 cri.go:89] found id: ""
	I1208 01:57:26.465039 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.465048 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:26.465055 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:26.465113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:26.493403 1136586 cri.go:89] found id: ""
	I1208 01:57:26.493429 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.493438 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:26.493449 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:26.493537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:26.519773 1136586 cri.go:89] found id: ""
	I1208 01:57:26.519799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.519814 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:26.519821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:26.519883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:26.548992 1136586 cri.go:89] found id: ""
	I1208 01:57:26.549025 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.549037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:26.549047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:26.549127 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:26.574005 1136586 cri.go:89] found id: ""
	I1208 01:57:26.574031 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.574041 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:26.574047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:26.574111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:26.609416 1136586 cri.go:89] found id: ""
	I1208 01:57:26.609443 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.609452 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:26.609459 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:26.609517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:26.640996 1136586 cri.go:89] found id: ""
	I1208 01:57:26.641021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.641031 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:26.641037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:26.641096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:26.667832 1136586 cri.go:89] found id: ""
	I1208 01:57:26.667861 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.667870 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:26.667880 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:26.667911 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:26.727920 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:26.727958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:26.743134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:26.743167 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:26.805654 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:26.805676 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:26.805689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:26.833117 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:26.833153 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
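
The timestamps show the whole cycle repeating roughly every three seconds (01:57:11, :14, :17, ...), a fixed-interval poll rather than exponential backoff. A hedged sketch of such a poll loop with an overall deadline (the interval matches the cadence observed above; the two-minute budget is an assumption, not a value from the log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            if apiserverHealthy() {
                fmt.Println("apiserver is up")
                return
            }
            time.Sleep(3 * time.Second) // cadence observed in the log above
        }
        fmt.Println("gave up waiting for apiserver")
    }

    // apiserverHealthy is a stand-in for the pgrep/crictl checks shown in the log.
    func apiserverHealthy() bool { return false }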
	I1208 01:57:29.374479 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.385263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:29.385343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:29.411850 1136586 cri.go:89] found id: ""
	I1208 01:57:29.411881 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.411890 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:29.411897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:29.411957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:29.436577 1136586 cri.go:89] found id: ""
	I1208 01:57:29.436650 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.436667 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:29.436674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:29.436741 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:29.461265 1136586 cri.go:89] found id: ""
	I1208 01:57:29.461287 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.461296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:29.461302 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:29.461375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:29.485998 1136586 cri.go:89] found id: ""
	I1208 01:57:29.486024 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.486033 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:29.486039 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:29.486102 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:29.515456 1136586 cri.go:89] found id: ""
	I1208 01:57:29.515482 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.515491 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:29.515498 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:29.515574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:29.540631 1136586 cri.go:89] found id: ""
	I1208 01:57:29.540658 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.540667 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:29.540674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:29.540771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:29.569112 1136586 cri.go:89] found id: ""
	I1208 01:57:29.569156 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.569182 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:29.569194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:29.569276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:29.601158 1136586 cri.go:89] found id: ""
	I1208 01:57:29.601182 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.601192 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:29.601201 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:29.601213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:29.681907 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:29.681933 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:29.681946 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:29.707746 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:29.707781 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.740008 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:29.740036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:29.795859 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:29.795893 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
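
Note also that describe nodes is run with the kubectl binary pinned to the exact Kubernetes version under test and pointed at the node-local kubeconfig rather than the host's ~/.kube/config. A sketch of that invocation (paths copied from the log; the wrapper itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Version-pinned kubectl path and node-local kubeconfig, as in the log above.
        kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            // With no apiserver listening, this fails exactly as in the log above.
            fmt.Printf("describe nodes failed: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out))
    }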
	I1208 01:57:32.311192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.322374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:32.322487 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:32.352628 1136586 cri.go:89] found id: ""
	I1208 01:57:32.352653 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.352662 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:32.352668 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:32.352727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:32.379283 1136586 cri.go:89] found id: ""
	I1208 01:57:32.379308 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.379317 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:32.379323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:32.379383 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:32.405884 1136586 cri.go:89] found id: ""
	I1208 01:57:32.405911 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.405919 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:32.405926 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:32.405985 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:32.431914 1136586 cri.go:89] found id: ""
	I1208 01:57:32.431939 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.431948 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:32.431958 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:32.432019 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:32.456763 1136586 cri.go:89] found id: ""
	I1208 01:57:32.456791 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.456799 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:32.456806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:32.456868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:32.482420 1136586 cri.go:89] found id: ""
	I1208 01:57:32.482467 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.482476 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:32.482483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:32.482550 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:32.507167 1136586 cri.go:89] found id: ""
	I1208 01:57:32.507201 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.507210 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:32.507218 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:32.507281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:32.532583 1136586 cri.go:89] found id: ""
	I1208 01:57:32.532612 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.532621 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:32.532630 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:32.532642 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:32.562135 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:32.562163 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:32.619510 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:32.619544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.636767 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:32.636845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:32.721264 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:32.721287 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:32.721300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:35.247026 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.260135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:35.260203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:35.288106 1136586 cri.go:89] found id: ""
	I1208 01:57:35.288130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.288138 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:35.288146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:35.288206 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:35.314646 1136586 cri.go:89] found id: ""
	I1208 01:57:35.314672 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.314682 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:35.314689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:35.314777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:35.342658 1136586 cri.go:89] found id: ""
	I1208 01:57:35.342685 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.342693 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:35.342700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:35.342762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:35.367839 1136586 cri.go:89] found id: ""
	I1208 01:57:35.367862 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.367870 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:35.367877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:35.367937 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:35.392345 1136586 cri.go:89] found id: ""
	I1208 01:57:35.392419 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.392449 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:35.392461 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:35.392525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:35.417214 1136586 cri.go:89] found id: ""
	I1208 01:57:35.417241 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.417250 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:35.417257 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:35.417318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:35.444512 1136586 cri.go:89] found id: ""
	I1208 01:57:35.444538 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.444546 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:35.444556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:35.444614 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:35.470153 1136586 cri.go:89] found id: ""
	I1208 01:57:35.470227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.470250 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:35.470272 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:35.470310 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:35.497905 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:35.497934 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:35.553331 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:35.553369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:35.568215 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:35.568246 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:35.665180 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:35.665205 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:35.665219 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
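The cycle above then repeats on a roughly three-second cadence: logs.go first checks for a live apiserver process with pgrep, then asks crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and only after every query comes back empty does it re-gather the kubelet, dmesg, describe-nodes, containerd, and container-status logs. A rough stand-alone reconstruction of that scan, not minikube's actual code, assuming crictl and sudo are available on the node:

	# Hedged sketch of the per-component scan seen in the log:
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done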
	I1208 01:57:38.193386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.204636 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:38.204720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:38.230690 1136586 cri.go:89] found id: ""
	I1208 01:57:38.230717 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.230726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:38.230732 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:38.230791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:38.255363 1136586 cri.go:89] found id: ""
	I1208 01:57:38.255385 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.255394 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:38.255401 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:38.255460 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:38.282875 1136586 cri.go:89] found id: ""
	I1208 01:57:38.282899 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.282907 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:38.282914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:38.282980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:38.308397 1136586 cri.go:89] found id: ""
	I1208 01:57:38.308422 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.308437 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:38.308443 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:38.308505 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:38.334844 1136586 cri.go:89] found id: ""
	I1208 01:57:38.334871 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.334880 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:38.334886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:38.334945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:38.360635 1136586 cri.go:89] found id: ""
	I1208 01:57:38.360659 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.360669 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:38.360676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:38.360737 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:38.385673 1136586 cri.go:89] found id: ""
	I1208 01:57:38.385702 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.385710 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:38.385717 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:38.385776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:38.410525 1136586 cri.go:89] found id: ""
	I1208 01:57:38.410560 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.410569 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:38.410578 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:38.410589 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:38.467839 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:38.467874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:38.482720 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:38.482748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:38.547244 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:38.547268 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:38.547282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.573312 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:38.573350 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
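The "container status" gatherer is written defensively: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's full path when it is on PATH, falls back to the bare name otherwise, and if that invocation fails tries docker instead, so the same one-liner works on containerd and docker runtimes alike. A roughly equivalent, more readable form (a sketch, not the exact command minikube runs):

	# Sketch: prefer crictl when present, otherwise fall back to docker
	if command -v crictl >/dev/null 2>&1; then
	    sudo crictl ps -a
	else
	    sudo docker ps -a
	fi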
	I1208 01:57:41.116290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:41.132190 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:41.132273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:41.164024 1136586 cri.go:89] found id: ""
	I1208 01:57:41.164049 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.164058 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:41.164064 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:41.164126 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:41.190343 1136586 cri.go:89] found id: ""
	I1208 01:57:41.190380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.190390 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:41.190396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:41.190480 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:41.215567 1136586 cri.go:89] found id: ""
	I1208 01:57:41.215591 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.215600 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:41.215607 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:41.215712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:41.241307 1136586 cri.go:89] found id: ""
	I1208 01:57:41.241380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.241404 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:41.241424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:41.241510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:41.266598 1136586 cri.go:89] found id: ""
	I1208 01:57:41.266666 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.266682 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:41.266689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:41.266748 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:41.292745 1136586 cri.go:89] found id: ""
	I1208 01:57:41.292806 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.292833 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:41.292851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:41.292947 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:41.322477 1136586 cri.go:89] found id: ""
	I1208 01:57:41.322503 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.322528 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:41.322534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:41.322598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:41.348001 1136586 cri.go:89] found id: ""
	I1208 01:57:41.348028 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.348037 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:41.348047 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:41.348059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:41.413651 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:41.413677 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:41.413690 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:41.443591 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:41.443637 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:41.475807 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:41.475839 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:41.531946 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:41.531985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
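Note that both journald gathers are bounded: journalctl -u kubelet -n 400 and journalctl -u containerd -n 400 return only the newest 400 lines of each unit, and the dmesg call disables the pager (-P), uses human-readable output (-H), turns colour off (-L=never), and keeps only warn-or-worse records before tail -n 400 truncates the rest. The same bounded gathers can be run by hand on any systemd host:

	# The log-gathering commands from above, runnable as-is with sudo:
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400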
	I1208 01:57:44.047381 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.058560 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:44.058632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:44.086946 1136586 cri.go:89] found id: ""
	I1208 01:57:44.086974 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.086983 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:44.086990 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:44.087055 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:44.119808 1136586 cri.go:89] found id: ""
	I1208 01:57:44.119837 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.119846 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:44.119853 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:44.119914 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:44.151166 1136586 cri.go:89] found id: ""
	I1208 01:57:44.151189 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.151197 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:44.151204 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:44.151266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:44.179208 1136586 cri.go:89] found id: ""
	I1208 01:57:44.179232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.179240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:44.179247 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:44.179307 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:44.204931 1136586 cri.go:89] found id: ""
	I1208 01:57:44.204957 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.204967 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:44.204973 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:44.205086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:44.233222 1136586 cri.go:89] found id: ""
	I1208 01:57:44.233263 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.233289 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:44.233303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:44.233381 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:44.258112 1136586 cri.go:89] found id: ""
	I1208 01:57:44.258180 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.258204 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:44.258225 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:44.258301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:44.282317 1136586 cri.go:89] found id: ""
	I1208 01:57:44.282339 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.282348 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:44.282358 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:44.282369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:44.337431 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:44.337465 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:44.352560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:44.352633 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:44.416710 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:44.416734 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:44.416745 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:44.443231 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:44.443264 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
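The found id: "" lines are the literal effect of --quiet: crictl ps -a --quiet --name=<component> prints matching container IDs one per line and nothing else, so an empty string means zero matches, which logs.go then reports as 0 containers: []. The cri.go lines also record where it is looking, the containerd runc root /run/containerd/runc/k8s.io, i.e. the k8s.io containerd namespace. That namespace can be inspected directly, assuming the node's containerd ships the ctr CLI:

	# Sketch: list the k8s.io namespace straight from containerd
	sudo ctr --namespace k8s.io containers list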
	I1208 01:57:46.971715 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.982590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:46.982716 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:47.013622 1136586 cri.go:89] found id: ""
	I1208 01:57:47.013655 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.013665 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:47.013689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:47.013773 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:47.039262 1136586 cri.go:89] found id: ""
	I1208 01:57:47.039288 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.039298 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:47.039305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:47.039369 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:47.064571 1136586 cri.go:89] found id: ""
	I1208 01:57:47.064597 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.064606 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:47.064612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:47.064671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:47.103360 1136586 cri.go:89] found id: ""
	I1208 01:57:47.103428 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.103452 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:47.103471 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:47.103558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:47.137446 1136586 cri.go:89] found id: ""
	I1208 01:57:47.137514 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.137537 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:47.137556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:47.137643 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:47.167484 1136586 cri.go:89] found id: ""
	I1208 01:57:47.167507 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.167515 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:47.167522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:47.167581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:47.198040 1136586 cri.go:89] found id: ""
	I1208 01:57:47.198072 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.198082 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:47.198088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:47.198155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:47.222585 1136586 cri.go:89] found id: ""
	I1208 01:57:47.222609 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.222618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:47.222635 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:47.222648 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:47.253438 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:47.253468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:47.312655 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:47.312692 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:47.328066 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:47.328146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:47.396328 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:47.396351 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:47.396365 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
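That kubectl keeps dialing localhost:8443 is decided entirely by /var/lib/minikube/kubeconfig, not by anything passed on the test's command line. The server it targets can be read back with the same binary and kubeconfig shown in the log (paths taken from the lines above):

	# Sketch: print the API server URL the in-node kubectl is configured for
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
	    --kubeconfig=/var/lib/minikube/kubeconfig \
	    -o jsonpath='{.clusters[0].cluster.server}'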
	I1208 01:57:49.922587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.933241 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.933357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.957944 1136586 cri.go:89] found id: ""
	I1208 01:57:49.957967 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.957976 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.957983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.958043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.983531 1136586 cri.go:89] found id: ""
	I1208 01:57:49.983556 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.983565 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.983573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.983634 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:50.014921 1136586 cri.go:89] found id: ""
	I1208 01:57:50.014948 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.014958 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:50.014965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:50.015054 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:50.051300 1136586 cri.go:89] found id: ""
	I1208 01:57:50.051356 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.051365 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:50.051373 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:50.051439 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:50.078205 1136586 cri.go:89] found id: ""
	I1208 01:57:50.078232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.078242 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:50.078248 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:50.078313 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:50.116415 1136586 cri.go:89] found id: ""
	I1208 01:57:50.116472 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.116482 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:50.116489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:50.116549 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:50.152924 1136586 cri.go:89] found id: ""
	I1208 01:57:50.152953 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.152962 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:50.152971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:50.153034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:50.183266 1136586 cri.go:89] found id: ""
	I1208 01:57:50.183303 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.183313 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:50.183323 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:50.183339 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:50.219490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:50.219518 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:50.278125 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:50.278160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:50.293360 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:50.293392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:50.361099 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:50.361124 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:50.361137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:52.887762 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:52.898605 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:52.898684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:52.924862 1136586 cri.go:89] found id: ""
	I1208 01:57:52.924888 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.924898 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:52.924904 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:52.924967 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.953738 1136586 cri.go:89] found id: ""
	I1208 01:57:52.953766 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.953775 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.953781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.953841 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.979112 1136586 cri.go:89] found id: ""
	I1208 01:57:52.979135 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.979143 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.979156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.979220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:53.010105 1136586 cri.go:89] found id: ""
	I1208 01:57:53.010136 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.010146 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:53.010153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:53.010224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:53.040709 1136586 cri.go:89] found id: ""
	I1208 01:57:53.040737 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.040746 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:53.040759 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:53.040820 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:53.066591 1136586 cri.go:89] found id: ""
	I1208 01:57:53.066615 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.066624 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:53.066631 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:53.066690 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:53.103691 1136586 cri.go:89] found id: ""
	I1208 01:57:53.103721 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.103730 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:53.103737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:53.103796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:53.135825 1136586 cri.go:89] found id: ""
	I1208 01:57:53.135860 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.135869 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:53.135879 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:53.135892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:53.154871 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:53.154897 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:53.223770 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:53.223803 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:53.223818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:53.248879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:53.248912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:53.278989 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:53.279015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
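The pgrep -xnf kube-apiserver.*minikube.* probe at the top of every cycle is an exact full-command-line match: -f compares the pattern against the whole command line, -x requires the match to cover it exactly, and -n keeps only the newest PID. It exits non-zero on every attempt here, which is what keeps the loop polling instead of moving on. The same liveness check, with the exit status made explicit:

	# Sketch: the apiserver process probe from the log, as an if-test
	if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver process is up"
	else
	    echo "kube-apiserver process not found"
	fi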
	I1208 01:57:55.836344 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:55.851014 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:55.851088 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:55.880945 1136586 cri.go:89] found id: ""
	I1208 01:57:55.880968 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.880977 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:55.880983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:55.881047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.918324 1136586 cri.go:89] found id: ""
	I1208 01:57:55.918348 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.918357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.918363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.918420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.943772 1136586 cri.go:89] found id: ""
	I1208 01:57:55.943799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.943808 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.943814 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.943872 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.968672 1136586 cri.go:89] found id: ""
	I1208 01:57:55.968695 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.968705 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.968711 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.968772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.993546 1136586 cri.go:89] found id: ""
	I1208 01:57:55.993573 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.993582 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.993588 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.993648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:56.026891 1136586 cri.go:89] found id: ""
	I1208 01:57:56.026916 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.026924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:56.026931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:56.026998 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:56.053302 1136586 cri.go:89] found id: ""
	I1208 01:57:56.053334 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.053344 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:56.053356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:56.053468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:56.079706 1136586 cri.go:89] found id: ""
	I1208 01:57:56.079733 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.079741 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:56.079750 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:56.079761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:56.142320 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:56.142357 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:56.157995 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:56.158067 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:56.221039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
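The stderr above is the key symptom: every probe to `https://localhost:8443` fails with `connection refused` on `[::1]:8443`, which means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure. A minimal, hypothetical Go sketch of that distinction, probing the same endpoint the failing kubectl calls use (only `localhost:8443` is taken from the log; the rest is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	// Probe the apiserver port the same way the failing kubectl calls do.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("something is listening on 8443")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// The case the log shows: the port is closed, so no apiserver is bound to it.
		fmt.Println("connection refused: no process listening on 8443")
		return
	}
	fmt.Println("other dial error:", err)
}
```

On this node the refused branch fires, which is consistent with every `crictl` probe in the surrounding lines finding no kube-apiserver container.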
	I1208 01:57:56.221063 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:56.221077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:56.247019 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:56.247058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
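Each `found id: ""` / `0 containers` pair above records one `sudo crictl ps -a --quiet --name=<component>` invocation that printed no container IDs. A hedged sketch of that probe; the command is copied from the log lines, while the surrounding Go is illustrative and not minikube's actual cri.go, and it assumes `crictl` and passwordless `sudo` are available on the node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe in the log: `crictl ps -a --quiet --name=<name>`
// prints one container ID per line, or nothing when no container matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, "probe failed:", err)
			continue
		}
		if len(ids) == 0 {
			// The state the log shows: zero containers for every control-plane component.
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}
```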
	I1208 01:57:58.775233 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:58.785596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:58.785682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:58.809955 1136586 cri.go:89] found id: ""
	I1208 01:57:58.809986 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.809996 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:58.810002 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:58.810061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:58.835423 1136586 cri.go:89] found id: ""
	I1208 01:57:58.835447 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.835456 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:58.835462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:58.835524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.867905 1136586 cri.go:89] found id: ""
	I1208 01:57:58.867928 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.867937 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.867943 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.868003 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.896767 1136586 cri.go:89] found id: ""
	I1208 01:57:58.896794 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.896803 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.896810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.896868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.926611 1136586 cri.go:89] found id: ""
	I1208 01:57:58.926633 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.926642 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.926648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.926707 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.954977 1136586 cri.go:89] found id: ""
	I1208 01:57:58.955001 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.955010 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.955016 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.955075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.984186 1136586 cri.go:89] found id: ""
	I1208 01:57:58.984209 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.984218 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.984224 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.984286 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:59.011291 1136586 cri.go:89] found id: ""
	I1208 01:57:59.011314 1136586 logs.go:282] 0 containers: []
	W1208 01:57:59.011323 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:59.011333 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:59.011346 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:59.067486 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:59.067520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:59.082307 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:59.082334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:59.162802 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:59.162826 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:59.162838 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:59.187405 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:59.187437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:01.720540 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:01.731197 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:01.731266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:01.756392 1136586 cri.go:89] found id: ""
	I1208 01:58:01.756414 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.756431 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:01.756438 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:01.756504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:01.782980 1136586 cri.go:89] found id: ""
	I1208 01:58:01.783050 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.783074 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:01.783099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:01.783180 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:01.808911 1136586 cri.go:89] found id: ""
	I1208 01:58:01.808947 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.808957 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:01.808964 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:01.809032 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.833417 1136586 cri.go:89] found id: ""
	I1208 01:58:01.833490 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.833514 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.833534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.833644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.863178 1136586 cri.go:89] found id: ""
	I1208 01:58:01.863255 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.863277 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.863296 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.863391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.893466 1136586 cri.go:89] found id: ""
	I1208 01:58:01.893540 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.893562 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.893582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.893669 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.927969 1136586 cri.go:89] found id: ""
	I1208 01:58:01.928046 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.928060 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.928067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.928137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.954102 1136586 cri.go:89] found id: ""
	I1208 01:58:01.954130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.954141 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.954150 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.954162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:02.011065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:02.011103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:02.028187 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:02.028220 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:02.092492 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:02.092518 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:02.092532 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:02.123344 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:02.123377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
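The timestamps (01:57:55, 01:57:58, 01:58:01, 01:58:04, ...) show the whole gather cycle repeating roughly every three seconds, gated by the `sudo pgrep -xnf kube-apiserver.*minikube.*` check that opens each cycle. A sketch of such a poll loop: the 3s interval matches the spacing of the log entries, while the overall deadline is an assumption, since the log does not state the configured timeout:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.After(4 * time.Minute) // assumed overall wait; not stated in the log
	tick := time.NewTicker(3 * time.Second) // matches the ~3s spacing of the log entries
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			fmt.Println("gave up waiting for kube-apiserver")
			return
		case <-tick.C:
			// The same process check the log records at the top of each cycle.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			// On failure, the log shows another round of kubelet/dmesg/containerd gathering.
		}
	}
}
```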
	I1208 01:58:04.657423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:04.669705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:04.669794 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:04.696818 1136586 cri.go:89] found id: ""
	I1208 01:58:04.696848 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.696857 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:04.696864 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:04.696973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:04.723929 1136586 cri.go:89] found id: ""
	I1208 01:58:04.723951 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.723960 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:04.723967 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:04.724028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:04.749688 1136586 cri.go:89] found id: ""
	I1208 01:58:04.749712 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.749721 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:04.749727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:04.749790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:04.780181 1136586 cri.go:89] found id: ""
	I1208 01:58:04.780212 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.780223 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:04.780230 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:04.780310 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.805904 1136586 cri.go:89] found id: ""
	I1208 01:58:04.805930 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.805941 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.805947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.806004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.830657 1136586 cri.go:89] found id: ""
	I1208 01:58:04.830682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.830692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.830699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.830765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.870065 1136586 cri.go:89] found id: ""
	I1208 01:58:04.870130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.870152 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.870170 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.870263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.898118 1136586 cri.go:89] found id: ""
	I1208 01:58:04.898185 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.898207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.898228 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.898266 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:04.931407 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.931433 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.987787 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.987825 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:05.003245 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:05.003331 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:05.079158 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:05.079184 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:05.079196 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:07.607089 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:07.617881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:07.617954 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:07.643290 1136586 cri.go:89] found id: ""
	I1208 01:58:07.643356 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.643378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:07.643396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:07.643483 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:07.668986 1136586 cri.go:89] found id: ""
	I1208 01:58:07.669054 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.669078 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:07.669099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:07.669190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:07.703052 1136586 cri.go:89] found id: ""
	I1208 01:58:07.703077 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.703086 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:07.703093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:07.703153 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:07.730752 1136586 cri.go:89] found id: ""
	I1208 01:58:07.730780 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.730791 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:07.730801 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:07.730864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:07.757395 1136586 cri.go:89] found id: ""
	I1208 01:58:07.757420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.757429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:07.757442 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:07.757504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:07.781922 1136586 cri.go:89] found id: ""
	I1208 01:58:07.781946 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.781955 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:07.781961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:07.782020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:07.806746 1136586 cri.go:89] found id: ""
	I1208 01:58:07.806769 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.806778 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:07.806785 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:07.806855 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:07.835050 1136586 cri.go:89] found id: ""
	I1208 01:58:07.835079 1136586 logs.go:282] 0 containers: []
	W1208 01:58:07.835088 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:07.835097 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:07.835110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:07.898132 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:07.898165 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:07.918936 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:07.918964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:07.984291 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:07.975795    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.976627    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978225    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978797    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.980321    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:07.975795    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.976627    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978225    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.978797    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:07.980321    9978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:07.984315 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:07.984328 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:08.010075 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:08.010113 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
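Every cycle gathers the same fixed set of sources: kubelet and containerd via `journalctl -n 400`, `dmesg` filtered to warning level and above, `kubectl describe nodes`, and a `crictl`/`docker ps` container listing. A table-driven sketch of that gathering step, with the shell commands copied verbatim from the log lines and everything else illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
)

// The commands below are copied from the log, wrapped in bash exactly as they
// appear in the ssh_runner lines above.
var sources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("=== %s (%d bytes) ===\n", name, len(out))
	}
}
```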
	I1208 01:58:10.540471 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:10.551266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:10.551338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:10.577175 1136586 cri.go:89] found id: ""
	I1208 01:58:10.577202 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.577212 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:10.577219 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:10.577281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:10.602532 1136586 cri.go:89] found id: ""
	I1208 01:58:10.602567 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.602577 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:10.602584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:10.602646 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:10.628758 1136586 cri.go:89] found id: ""
	I1208 01:58:10.628782 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.628790 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:10.628796 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:10.628860 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:10.658744 1136586 cri.go:89] found id: ""
	I1208 01:58:10.658767 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.658776 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:10.658783 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:10.658848 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:10.687442 1136586 cri.go:89] found id: ""
	I1208 01:58:10.687466 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.687475 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:10.687483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:10.687547 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:10.713454 1136586 cri.go:89] found id: ""
	I1208 01:58:10.713527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.713551 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:10.713573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:10.713662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:10.738872 1136586 cri.go:89] found id: ""
	I1208 01:58:10.738896 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.738905 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:10.738912 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:10.739073 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:10.764935 1136586 cri.go:89] found id: ""
	I1208 01:58:10.764962 1136586 logs.go:282] 0 containers: []
	W1208 01:58:10.764972 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:10.764981 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:10.764995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:10.822530 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:10.822568 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:10.837607 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:10.837635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:10.917003 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:10.907188   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.907604   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.908761   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910108   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910871   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:10.907188   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.907604   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.908761   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910108   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:10.910871   10089 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:10.917024 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:10.917036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:10.943077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:10.943113 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:13.473561 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:13.484592 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:13.484660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:13.511440 1136586 cri.go:89] found id: ""
	I1208 01:58:13.511463 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.511472 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:13.511478 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:13.511541 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:13.535634 1136586 cri.go:89] found id: ""
	I1208 01:58:13.535659 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.535668 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:13.535675 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:13.535734 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:13.560688 1136586 cri.go:89] found id: ""
	I1208 01:58:13.560712 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.560720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:13.560727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:13.560791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:13.586137 1136586 cri.go:89] found id: ""
	I1208 01:58:13.586217 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.586240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:13.586261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:13.586354 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:13.612353 1136586 cri.go:89] found id: ""
	I1208 01:58:13.612378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.612388 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:13.612394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:13.612466 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:13.642171 1136586 cri.go:89] found id: ""
	I1208 01:58:13.642198 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.642208 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:13.642215 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:13.642276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:13.668409 1136586 cri.go:89] found id: ""
	I1208 01:58:13.668440 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.668448 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:13.668455 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:13.668537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:13.701198 1136586 cri.go:89] found id: ""
	I1208 01:58:13.701223 1136586 logs.go:282] 0 containers: []
	W1208 01:58:13.701232 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:13.701240 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:13.701252 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:13.758303 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:13.758338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:13.773305 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:13.773343 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:13.842494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:13.831867   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.832590   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834278   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834758   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.836399   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:13.831867   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.832590   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834278   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.834758   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:13.836399   10197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
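`Process exited with status 1` in the warnings above is the Go side reporting a non-zero exit from kubectl, which is distinct from failing to start the process at all. A small sketch of how a caller can tell the two cases apart with `*exec.ExitError`; the kubeconfig path is taken from the log, and a `kubectl` binary on PATH is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// kubectl exits 1 when it cannot reach the server, which is what the
	// "Process exited with status 1" warnings above record.
	cmd := exec.Command("/bin/bash", "-c",
		"kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: the binary ran, but the API call failed, as in the log.
		fmt.Printf("kubectl exited with status %d\n%s", exitErr.ExitCode(), out)
	} else if err != nil {
		// Could not even start the process (binary missing, permissions, ...).
		fmt.Println("failed to run kubectl:", err)
	}
}
```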
	I1208 01:58:13.842521 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:13.842537 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:13.871092 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:13.871129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:16.410612 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:16.421252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:16.421335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:16.448846 1136586 cri.go:89] found id: ""
	I1208 01:58:16.448872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.448880 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:16.448887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:16.448954 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:16.478943 1136586 cri.go:89] found id: ""
	I1208 01:58:16.478968 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.478977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:16.478984 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:16.479044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:16.504203 1136586 cri.go:89] found id: ""
	I1208 01:58:16.504230 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.504239 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:16.504245 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:16.504305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:16.531210 1136586 cri.go:89] found id: ""
	I1208 01:58:16.531238 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.531247 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:16.531254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:16.531343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:16.561091 1136586 cri.go:89] found id: ""
	I1208 01:58:16.561122 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.561130 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:16.561137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:16.561199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:16.586402 1136586 cri.go:89] found id: ""
	I1208 01:58:16.586427 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.586435 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:16.586462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:16.586524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:16.611837 1136586 cri.go:89] found id: ""
	I1208 01:58:16.611863 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.611873 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:16.611879 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:16.611961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:16.637357 1136586 cri.go:89] found id: ""
	I1208 01:58:16.637399 1136586 logs.go:282] 0 containers: []
	W1208 01:58:16.637408 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:16.637434 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:16.637468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:16.692659 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:16.692739 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:16.709626 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:16.709655 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:16.785738 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:16.776953   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.777418   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779067   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779838   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.781586   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:16.776953   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.777418   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779067   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.779838   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:16.781586   10312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:16.785761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:16.785774 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:16.811061 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:16.811096 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:19.346587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:19.359091 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:19.359159 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:19.395510 1136586 cri.go:89] found id: ""
	I1208 01:58:19.395536 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.395545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:19.395551 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:19.395609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:19.423019 1136586 cri.go:89] found id: ""
	I1208 01:58:19.423044 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.423053 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:19.423059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:19.423120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:19.449460 1136586 cri.go:89] found id: ""
	I1208 01:58:19.449487 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.449496 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:19.449503 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:19.449574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:19.476285 1136586 cri.go:89] found id: ""
	I1208 01:58:19.476311 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.476320 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:19.476327 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:19.476387 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:19.504576 1136586 cri.go:89] found id: ""
	I1208 01:58:19.504603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.504613 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:19.504620 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:19.504682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:19.530968 1136586 cri.go:89] found id: ""
	I1208 01:58:19.530994 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.531015 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:19.531023 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:19.531092 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:19.555468 1136586 cri.go:89] found id: ""
	I1208 01:58:19.555492 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.555501 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:19.555508 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:19.555571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:19.580667 1136586 cri.go:89] found id: ""
	I1208 01:58:19.580703 1136586 logs.go:282] 0 containers: []
	W1208 01:58:19.580716 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:19.580726 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:19.580737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:19.638717 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:19.638754 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:19.653903 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:19.653935 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:19.721039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:19.712404   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.713247   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.714932   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.715513   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:19.717086   10425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:19.721058 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:19.721071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:19.747016 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:19.747054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
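Each retry cycle opens with the same liveness probe, sudo pgrep -xnf kube-apiserver.*minikube.*. The flags matter: -f matches the pattern against the full command line, -x requires that match to cover the command line exactly, and -n keeps only the newest matching process. A sketch of the probe on its own, assuming the apiserver command line embeds the profile name as it does here:

    # Print the newest matching PID, or report the condition this log keeps hitting.
    if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
        echo "kube-apiserver running as PID $pid"
    else
        echo "kube-apiserver process not found"
    fi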
	I1208 01:58:22.280191 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:22.290698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:22.290771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:22.319983 1136586 cri.go:89] found id: ""
	I1208 01:58:22.320007 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.320016 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:22.320022 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:22.320084 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:22.349912 1136586 cri.go:89] found id: ""
	I1208 01:58:22.349939 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.349949 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:22.349955 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:22.350016 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:22.381227 1136586 cri.go:89] found id: ""
	I1208 01:58:22.381253 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.381262 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:22.381269 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:22.381327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:22.412055 1136586 cri.go:89] found id: ""
	I1208 01:58:22.412130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.412143 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:22.412150 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:22.412219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:22.437094 1136586 cri.go:89] found id: ""
	I1208 01:58:22.437169 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.437193 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:22.437214 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:22.437338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:22.466779 1136586 cri.go:89] found id: ""
	I1208 01:58:22.466809 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.466817 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:22.466824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:22.466888 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:22.492472 1136586 cri.go:89] found id: ""
	I1208 01:58:22.492555 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.492580 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:22.492599 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:22.492683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:22.517817 1136586 cri.go:89] found id: ""
	I1208 01:58:22.517865 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.517875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:22.517884 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:22.517896 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:22.533468 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:22.533495 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.600107 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.591549   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.592250   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594062   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594494   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.596227   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:22.600132 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:22.600145 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:22.625768 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.625805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:22.654249 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:22.654334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
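The kubelet and containerd logs are collected the same way on every pass: journalctl scoped to a single systemd unit with -u and capped at the most recent 400 lines with -n 400. The equivalent sweep written as one loop, assuming both units exist under their standard names:

    # Grab the last 400 journal lines for each unit the harness inspects.
    for unit in kubelet containerd; do
        echo "=== $unit ==="
        sudo journalctl -u "$unit" -n 400 --no-pager
    done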
	I1208 01:58:25.216756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:25.228093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:25.228171 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:25.254793 1136586 cri.go:89] found id: ""
	I1208 01:58:25.254820 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.254840 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:25.254848 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:25.254911 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:25.280729 1136586 cri.go:89] found id: ""
	I1208 01:58:25.280756 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.280765 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:25.280772 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:25.280856 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:25.306714 1136586 cri.go:89] found id: ""
	I1208 01:58:25.306786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.306802 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:25.306809 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:25.306883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:25.333920 1136586 cri.go:89] found id: ""
	I1208 01:58:25.333955 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.333964 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:25.333971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:25.334044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:25.361369 1136586 cri.go:89] found id: ""
	I1208 01:58:25.361396 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.361405 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:25.361412 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:25.361486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:25.392931 1136586 cri.go:89] found id: ""
	I1208 01:58:25.392958 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.392967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:25.392974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:25.393046 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:25.423143 1136586 cri.go:89] found id: ""
	I1208 01:58:25.423168 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.423177 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:25.423183 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:25.423245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:25.452795 1136586 cri.go:89] found id: ""
	I1208 01:58:25.452872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.452888 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:25.452899 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:25.452913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:25.479544 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.479585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:25.510747 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:25.510777 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.566401 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:25.566437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:25.581786 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:25.581816 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:25.653146 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:25.644228   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.645011   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.646682   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.647230   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.648941   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
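Every kubectl attempt in this stretch dies on the same dial tcp [::1]:8443: connect: connection refused, meaning nothing is accepting connections on the apiserver port at all; this is a missing listener, not an authentication or TLS problem. Two quick manual checks that would confirm it from inside the node (standard ss and curl invocations; their presence on the node image is an assumption):

    # Is anything listening on 8443?
    sudo ss -ltnp | grep ':8443' || echo "no listener on 8443"
    # Does a request complete? -k skips certificate verification.
    curl -ksS --max-time 5 https://localhost:8443/healthz || true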
	I1208 01:58:28.153984 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:28.164723 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:28.164793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:28.188760 1136586 cri.go:89] found id: ""
	I1208 01:58:28.188786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.188796 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:28.188803 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:28.188865 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:28.213011 1136586 cri.go:89] found id: ""
	I1208 01:58:28.213037 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.213046 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:28.213053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:28.213114 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:28.237473 1136586 cri.go:89] found id: ""
	I1208 01:58:28.237547 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.237559 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:28.237566 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:28.237692 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:28.264353 1136586 cri.go:89] found id: ""
	I1208 01:58:28.264378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.264387 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:28.264394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:28.264478 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:28.289216 1136586 cri.go:89] found id: ""
	I1208 01:58:28.289250 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.289259 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:28.289265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:28.289332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:28.314397 1136586 cri.go:89] found id: ""
	I1208 01:58:28.314431 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.314440 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:28.314480 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:28.314553 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:28.339256 1136586 cri.go:89] found id: ""
	I1208 01:58:28.339290 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.339299 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:28.339305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:28.339372 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:28.376790 1136586 cri.go:89] found id: ""
	I1208 01:58:28.376824 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.376833 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:28.376842 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:28.376854 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:28.412562 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:28.412597 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:28.468784 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:28.468818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:28.483513 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:28.483539 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:28.548999 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:28.540733   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.541130   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.542744   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.543481   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.545172   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:28.549069 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:28.549088 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
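The dmesg pass is filtered hard before capture: -P disables the pager that -H (human-readable timestamps) would otherwise start, -L=never strips color codes so the output stays plain text, and --level warn,err,crit,alert,emerg drops everything below warning severity. The same command as in the log, broken out for readability only:

    sudo dmesg -P -H -L=never \
        --level warn,err,crit,alert,emerg \
        | tail -n 400    # keep only the most recent 400 lines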
	I1208 01:58:31.074358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:31.085483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:31.085557 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:31.113378 1136586 cri.go:89] found id: ""
	I1208 01:58:31.113404 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.113413 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:31.113419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:31.113486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:31.151500 1136586 cri.go:89] found id: ""
	I1208 01:58:31.151527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.151537 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:31.151544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:31.151606 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:31.198664 1136586 cri.go:89] found id: ""
	I1208 01:58:31.198692 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.198701 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:31.198708 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:31.198770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:31.225073 1136586 cri.go:89] found id: ""
	I1208 01:58:31.225100 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.225109 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:31.225115 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:31.225178 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:31.253221 1136586 cri.go:89] found id: ""
	I1208 01:58:31.253248 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.253256 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:31.253263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:31.253328 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:31.278685 1136586 cri.go:89] found id: ""
	I1208 01:58:31.278715 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.278724 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:31.278731 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:31.278793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:31.308014 1136586 cri.go:89] found id: ""
	I1208 01:58:31.308040 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.308050 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:31.308057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:31.308118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:31.333618 1136586 cri.go:89] found id: ""
	I1208 01:58:31.333646 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.333655 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:31.333666 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:31.333677 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.360688 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:31.360767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:31.400673 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:31.400748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:31.458405 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:31.458467 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:31.473371 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:31.473403 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:31.535352 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:31.527438   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.527848   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529393   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529711   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.531184   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:34.035643 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:34.047071 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:34.047236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:34.072671 1136586 cri.go:89] found id: ""
	I1208 01:58:34.072696 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.072705 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:34.072712 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:34.072776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:34.102807 1136586 cri.go:89] found id: ""
	I1208 01:58:34.102835 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.102844 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:34.102851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:34.102910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:34.129970 1136586 cri.go:89] found id: ""
	I1208 01:58:34.129998 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.130007 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:34.130017 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:34.130077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:34.156982 1136586 cri.go:89] found id: ""
	I1208 01:58:34.157009 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.157019 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:34.157026 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:34.157086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:34.181976 1136586 cri.go:89] found id: ""
	I1208 01:58:34.182003 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.182013 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:34.182020 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:34.182081 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:34.206537 1136586 cri.go:89] found id: ""
	I1208 01:58:34.206615 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.206630 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:34.206638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:34.206699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:34.236167 1136586 cri.go:89] found id: ""
	I1208 01:58:34.236192 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.236201 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:34.236210 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:34.236270 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:34.262308 1136586 cri.go:89] found id: ""
	I1208 01:58:34.262332 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.262341 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:34.262351 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:34.262363 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:34.317558 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:34.317593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:34.332448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:34.332475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:34.412027 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:34.403876   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.404660   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406277   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406619   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.408039   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:34.412050 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:34.412062 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:34.438062 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:34.438097 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
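Note which kubectl is failing here: the harness runs the version-pinned binary that minikube installs under /var/lib/minikube/binaries/<version>/ and points it at the node-local kubeconfig, so the result reflects the cluster under test rather than whatever kubectl sits on the host PATH. The call, reproduced verbatim for manual debugging on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig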
	I1208 01:58:36.967795 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.978660 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.978730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:37.012757 1136586 cri.go:89] found id: ""
	I1208 01:58:37.012787 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.012797 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:37.012804 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:37.012878 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:37.041663 1136586 cri.go:89] found id: ""
	I1208 01:58:37.041685 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.041693 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:37.041700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:37.041758 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:37.066610 1136586 cri.go:89] found id: ""
	I1208 01:58:37.066694 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.066716 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:37.066734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:37.066844 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:37.094085 1136586 cri.go:89] found id: ""
	I1208 01:58:37.094162 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.094187 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:37.094209 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:37.094319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:37.132780 1136586 cri.go:89] found id: ""
	I1208 01:58:37.132864 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.132886 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:37.132905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:37.133017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:37.169263 1136586 cri.go:89] found id: ""
	I1208 01:58:37.169340 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.169365 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:37.169386 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:37.169498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:37.194196 1136586 cri.go:89] found id: ""
	I1208 01:58:37.194275 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.194300 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:37.194319 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:37.194404 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:37.219299 1136586 cri.go:89] found id: ""
	I1208 01:58:37.219378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.219415 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:37.219442 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:37.219469 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:37.274745 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:37.274782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:37.289751 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:37.289779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:37.363255 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:37.352560   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.353342   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355038   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355657   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.357229   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:37.363297 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:37.363316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:37.401496 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:37.401554 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
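The timestamps show this whole probe repeating roughly every three seconds, and each pass issues one crictl query per control-plane component; the empty found id: "" results are what produce the No container was found matching warnings above. The per-component sweep collapses to a single loop (a sketch, assuming crictl is configured for the same containerd root used in these runs):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching $name"
    done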
	I1208 01:58:39.942202 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:39.953239 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.953312 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.978920 1136586 cri.go:89] found id: ""
	I1208 01:58:39.978943 1136586 logs.go:282] 0 containers: []
	W1208 01:58:39.978952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.978959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.979017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:40.025284 1136586 cri.go:89] found id: ""
	I1208 01:58:40.025316 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.025343 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:40.025352 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:40.025427 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:40.067843 1136586 cri.go:89] found id: ""
	I1208 01:58:40.067869 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.067879 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:40.067886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:40.067952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:40.102669 1136586 cri.go:89] found id: ""
	I1208 01:58:40.102759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.102785 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:40.102806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:40.102923 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:40.150768 1136586 cri.go:89] found id: ""
	I1208 01:58:40.150799 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.150809 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:40.150815 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:40.150881 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:40.179334 1136586 cri.go:89] found id: ""
	I1208 01:58:40.179362 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.179373 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:40.179382 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:40.179453 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:40.208035 1136586 cri.go:89] found id: ""
	I1208 01:58:40.208063 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.208072 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:40.208079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:40.208144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:40.238244 1136586 cri.go:89] found id: ""
	I1208 01:58:40.238286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.238296 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:40.238306 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:40.238320 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:40.264240 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:40.264279 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:40.295875 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:40.295900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:40.355993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:40.356087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:40.374494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:40.374575 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:40.448504 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
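Every "describe nodes" attempt in this stretch fails identically: kubectl cannot even open a TCP connection to the API server on localhost:8443, so the problem is upstream of Kubernetes itself (nothing is listening on the port). A minimal sketch of the same reachability check, using only the Go standard library (an illustrative probe, not minikube code):

    // probe8443.go — illustrative probe, not minikube code: check whether
    // anything is listening where kubectl expects the API server.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubectl above dials localhost:8443; probe the same endpoint.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the log: "connect: connection refused", because the
            // kube-apiserver container was never created.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }

Run against the node while the test is wedged, this prints the same "connection refused" that the kubectl stderr above records five times per attempt.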
	I1208 01:58:42.948778 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.959677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.959745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.984449 1136586 cri.go:89] found id: ""
	I1208 01:58:42.984474 1136586 logs.go:282] 0 containers: []
	W1208 01:58:42.984483 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.984489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.984555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:43.015138 1136586 cri.go:89] found id: ""
	I1208 01:58:43.015163 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.015172 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:43.015178 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:43.015242 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:43.040581 1136586 cri.go:89] found id: ""
	I1208 01:58:43.040608 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.040617 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:43.040623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:43.040685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:43.066316 1136586 cri.go:89] found id: ""
	I1208 01:58:43.066345 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.066367 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:43.066374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:43.066484 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:43.095034 1136586 cri.go:89] found id: ""
	I1208 01:58:43.095062 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.095071 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:43.095077 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:43.095137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:43.129297 1136586 cri.go:89] found id: ""
	I1208 01:58:43.129323 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.129333 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:43.129340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:43.129413 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:43.160843 1136586 cri.go:89] found id: ""
	I1208 01:58:43.160912 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.160929 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:43.160937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:43.161012 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:43.189017 1136586 cri.go:89] found id: ""
	I1208 01:58:43.189043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.189051 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:43.189060 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:43.189071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:43.245153 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:43.245189 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:43.260337 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:43.260380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:43.329966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:43.329985 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:43.329998 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:43.357975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:43.358058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
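The block above is one full iteration of the wait loop that repeats throughout this log: probe for a kube-apiserver process with pgrep, list each expected control-plane container through the CRI, and, when everything comes back empty, gather the kubelet/dmesg/describe-nodes/containerd/container-status logs before retrying (the timestamps advance roughly three seconds per cycle). A hedged sketch of that shape, with an assumed six-minute deadline; the real logic lives in minikube and is more involved:

    // waitloop.go — illustrative only; mirrors the cadence visible in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the first check of every cycle:
    // sudo pgrep -xnf kube-apiserver.*minikube.*
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed timeout, not minikube's
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("apiserver is up")
                return
            }
            // In minikube, the CRI listings and log gathering shown above
            // happen here before the loop sleeps and retries.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for apiserver")
    }

The point of the pattern is that each failed probe is cheap, while the per-cycle log gathering doubles as a progress record for post-mortems like this report.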
	I1208 01:58:45.892416 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.902821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.902893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.929257 1136586 cri.go:89] found id: ""
	I1208 01:58:45.929283 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.929292 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.929299 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.929357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.954817 1136586 cri.go:89] found id: ""
	I1208 01:58:45.954851 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.954861 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.954867 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.954928 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.980153 1136586 cri.go:89] found id: ""
	I1208 01:58:45.980183 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.980196 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.980202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.980263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:46.009369 1136586 cri.go:89] found id: ""
	I1208 01:58:46.009398 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.009408 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:46.009415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:46.009555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:46.035686 1136586 cri.go:89] found id: ""
	I1208 01:58:46.035713 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.035736 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:46.035743 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:46.035815 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:46.065295 1136586 cri.go:89] found id: ""
	I1208 01:58:46.065327 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.065337 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:46.065344 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:46.065414 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:46.104678 1136586 cri.go:89] found id: ""
	I1208 01:58:46.104746 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.104769 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:46.104790 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:46.104877 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:46.134606 1136586 cri.go:89] found id: ""
	I1208 01:58:46.134682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.134705 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:46.134727 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:46.134766 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:46.198135 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:46.198171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:46.213155 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:46.213180 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:46.287421 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:46.287443 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:46.287456 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:46.313370 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:46.313405 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:48.849489 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.861044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.861117 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.886203 1136586 cri.go:89] found id: ""
	I1208 01:58:48.886227 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.886237 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.886243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.886305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.911152 1136586 cri.go:89] found id: ""
	I1208 01:58:48.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.911187 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.911193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.911275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.935595 1136586 cri.go:89] found id: ""
	I1208 01:58:48.935620 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.935629 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.935635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.935750 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.959533 1136586 cri.go:89] found id: ""
	I1208 01:58:48.959558 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.959566 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.959573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.959631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.985031 1136586 cri.go:89] found id: ""
	I1208 01:58:48.985057 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.985066 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.985073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.985176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:49.014577 1136586 cri.go:89] found id: ""
	I1208 01:58:49.014603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.014612 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:49.014619 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:49.014679 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:49.038952 1136586 cri.go:89] found id: ""
	I1208 01:58:49.038978 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.038987 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:49.038993 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:49.039051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:49.063733 1136586 cri.go:89] found id: ""
	I1208 01:58:49.063759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.063768 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:49.063777 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:49.063788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:49.097818 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:49.097852 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:49.161476 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:49.161513 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:49.178959 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:49.178995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:49.243404 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:49.243465 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:49.243502 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
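Each cri.go entry above is a name-filtered container query: `crictl ps -a --quiet --name=<pattern>` prints matching container IDs one per line, and an empty result is what logs.go then reports as "0 containers". A sketch of that query pattern, assuming crictl is on the local PATH (minikube actually runs it over SSH inside the node):

    // crilist.go — sketch of the name-filtered crictl query seen in cri.go.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (any state) whose
    // name matches the filter, e.g. "kube-apiserver". --quiet limits the
    // output to one container ID per line.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty output -> zero IDs
    }

    func main() {
        ids, err := listContainers("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

With the cluster in the state captured here, every one of these queries returns an empty ID list, which is why each component lookup logs `found id: ""`.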
	I1208 01:58:51.768803 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.780779 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.780851 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.808733 1136586 cri.go:89] found id: ""
	I1208 01:58:51.808759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.808768 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.808775 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.808846 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.835560 1136586 cri.go:89] found id: ""
	I1208 01:58:51.835587 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.835599 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.835606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.835670 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.860461 1136586 cri.go:89] found id: ""
	I1208 01:58:51.860485 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.860494 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.860501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.860562 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.885253 1136586 cri.go:89] found id: ""
	I1208 01:58:51.885286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.885294 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.885303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.885373 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.909393 1136586 cri.go:89] found id: ""
	I1208 01:58:51.909420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.909429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.909436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.909498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.934211 1136586 cri.go:89] found id: ""
	I1208 01:58:51.934245 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.934254 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.934261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.934331 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.958861 1136586 cri.go:89] found id: ""
	I1208 01:58:51.958887 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.958896 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.958903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.958961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.984069 1136586 cri.go:89] found id: ""
	I1208 01:58:51.984095 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.984106 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.984115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.984146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:51.999081 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.999109 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:52.068304 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:52.068327 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:52.068341 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:52.094374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:52.094481 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:52.127916 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:52.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.695208 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.706109 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.706218 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.731787 1136586 cri.go:89] found id: ""
	I1208 01:58:54.731814 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.731823 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.731835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.731895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.760606 1136586 cri.go:89] found id: ""
	I1208 01:58:54.760631 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.760639 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.760646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.760706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.786598 1136586 cri.go:89] found id: ""
	I1208 01:58:54.786626 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.786635 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.786641 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.786699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.816536 1136586 cri.go:89] found id: ""
	I1208 01:58:54.816562 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.816572 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.816579 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.816641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.845022 1136586 cri.go:89] found id: ""
	I1208 01:58:54.845048 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.845056 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.845063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.845125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.870700 1136586 cri.go:89] found id: ""
	I1208 01:58:54.870725 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.870734 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.870741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.870799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.899897 1136586 cri.go:89] found id: ""
	I1208 01:58:54.899923 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.899934 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.899941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.900002 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.928551 1136586 cri.go:89] found id: ""
	I1208 01:58:54.928575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.928584 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.928593 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.928606 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.991743 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.991769 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:54.991782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:55.022605 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:55.022696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:55.052018 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:55.052044 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:55.112862 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:55.112979 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
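The "Gathering logs for ..." lines cycle through the same five sources in varying order: the kubelet and containerd units via journalctl, the kernel ring buffer via dmesg, `kubectl describe nodes`, and a crictl/docker container listing. A compact sketch running that same command set locally (commands and the 400-line caps copied from the log; the ssh_runner indirection minikube uses is omitted):

    // gather.go — sketch: run the same log-collection commands locally.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "containerd":       "sudo journalctl -u containerd -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
        }
    }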
	I1208 01:58:57.628955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.639865 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.639964 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.667931 1136586 cri.go:89] found id: ""
	I1208 01:58:57.667954 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.667962 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.667969 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.668039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.696303 1136586 cri.go:89] found id: ""
	I1208 01:58:57.696328 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.696337 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.696343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.696402 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.720015 1136586 cri.go:89] found id: ""
	I1208 01:58:57.720043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.720052 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.720059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.720120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.748838 1136586 cri.go:89] found id: ""
	I1208 01:58:57.748910 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.748934 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.748953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.749033 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.776554 1136586 cri.go:89] found id: ""
	I1208 01:58:57.776575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.776584 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.776591 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.776648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.800791 1136586 cri.go:89] found id: ""
	I1208 01:58:57.800815 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.800823 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.800830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.800904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.825904 1136586 cri.go:89] found id: ""
	I1208 01:58:57.825975 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.825998 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.826021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.826157 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.853294 1136586 cri.go:89] found id: ""
	I1208 01:58:57.853318 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.853327 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.853336 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.853348 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.868267 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.868292 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.934230 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.934259 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:57.934274 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:57.960735 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.960767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:57.989741 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.989770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.546140 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.557379 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.557497 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.583568 1136586 cri.go:89] found id: ""
	I1208 01:59:00.583595 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.583605 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.583611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.583695 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.615812 1136586 cri.go:89] found id: ""
	I1208 01:59:00.615838 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.615847 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.615856 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.615924 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.642865 1136586 cri.go:89] found id: ""
	I1208 01:59:00.642905 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.642914 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.642921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.642991 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.669343 1136586 cri.go:89] found id: ""
	I1208 01:59:00.669418 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.669434 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.669441 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.669501 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.695611 1136586 cri.go:89] found id: ""
	I1208 01:59:00.695688 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.695702 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.695709 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.695774 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.721947 1136586 cri.go:89] found id: ""
	I1208 01:59:00.721974 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.721983 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.721989 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.722059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.747456 1136586 cri.go:89] found id: ""
	I1208 01:59:00.747485 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.747493 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.747500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.747567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.774802 1136586 cri.go:89] found id: ""
	I1208 01:59:00.774868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.774884 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.774894 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.774906 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.832246 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.832282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.847202 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.847231 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.912820 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.912843 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:00.912856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:00.938649 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.938689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
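The five memcache.go lines per attempt are kubectl's API-discovery retries before it gives up with the final "connection refused". Taken together with every CRI listing returning zero containers, the log points at the control-plane static pods never being created at all, which makes the kubelet journal (already gathered each cycle above) the natural place to look next. A small triage sketch combining those two signals (illustrative only; assumes crictl is available locally):

    // triage.go — illustrative: combine the two failure signals in this log.
    package main

    import (
        "fmt"
        "net"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        conn, dialErr := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if dialErr == nil {
            conn.Close()
        }
        out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        hasAPIServer := strings.TrimSpace(string(out)) != ""

        switch {
        case dialErr != nil && !hasAPIServer:
            // The state captured in this report: no container, port closed.
            fmt.Println("apiserver container never created; check: sudo journalctl -u kubelet")
        case dialErr != nil:
            fmt.Println("container exists but port refused; inspect the apiserver container logs")
        default:
            fmt.Println("apiserver reachable on :8443")
        }
    }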
	I1208 01:59:03.468247 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.479180 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.479248 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.503843 1136586 cri.go:89] found id: ""
	I1208 01:59:03.503868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.503877 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.503884 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.503946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.533070 1136586 cri.go:89] found id: ""
	I1208 01:59:03.533092 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.533101 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.533107 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.533173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.560639 1136586 cri.go:89] found id: ""
	I1208 01:59:03.560662 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.560670 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.560677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.560738 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.589123 1136586 cri.go:89] found id: ""
	I1208 01:59:03.589150 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.589159 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.589165 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.589225 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.620870 1136586 cri.go:89] found id: ""
	I1208 01:59:03.620893 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.620902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.620908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.620966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.648582 1136586 cri.go:89] found id: ""
	I1208 01:59:03.648607 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.648616 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.648623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.648688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.676092 1136586 cri.go:89] found id: ""
	I1208 01:59:03.676117 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.676125 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.676131 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.676193 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.704985 1136586 cri.go:89] found id: ""
	I1208 01:59:03.705012 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.705021 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.705031 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.705048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.762437 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.762476 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.777354 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.777423 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.852604 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.852630 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:03.852644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:03.877929 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.877964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
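	Each polling iteration in this loop issues the same eight crictl queries and then re-collects the kubelet, dmesg, and containerd logs. Condensed into shell form, using only commands that appear verbatim in the Run: lines above (the loop wrapper itself is illustrative, not minikube source):

	    # Probe each expected control-plane component for any container, running or exited.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      # An empty result is what drives the 'No container was found matching' warnings.
	      [ -z "$ids" ] && echo "no container found matching \"$c\""
	    done
	    # Log collection performed after the probes, exactly as in the log:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u containerd -n 400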
	[... the same polling cycle repeats with identical results at 01:59:06, 01:59:09, 01:59:12, 01:59:15, 01:59:18, 01:59:21, and 01:59:24: every crictl query finds no containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard, and each "kubectl describe nodes" attempt exits with status 1, connection refused on localhost:8443 ...]
	I1208 01:59:26.974915 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.985831 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.985904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:27.015934 1136586 cri.go:89] found id: ""
	I1208 01:59:27.015960 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.015970 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:27.015977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:27.016043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:27.042350 1136586 cri.go:89] found id: ""
	I1208 01:59:27.042376 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.042386 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:27.042400 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:27.042482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:27.068981 1136586 cri.go:89] found id: ""
	I1208 01:59:27.069007 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.069015 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:27.069021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:27.069086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.097058 1136586 cri.go:89] found id: ""
	I1208 01:59:27.097086 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.097095 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.097105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.097168 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.127221 1136586 cri.go:89] found id: ""
	I1208 01:59:27.127245 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.127253 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.127260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.127318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.152834 1136586 cri.go:89] found id: ""
	I1208 01:59:27.152859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.152869 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.152875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.152942 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.185563 1136586 cri.go:89] found id: ""
	I1208 01:59:27.185591 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.185600 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.185606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.185667 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.213022 1136586 cri.go:89] found id: ""
	I1208 01:59:27.213099 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.213125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.213147 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.213183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.272193 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.272229 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.289811 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.289892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.364663 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:27.364695 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:27.364720 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:27.392211 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.392286 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:29.931677 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.942629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.942709 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.971856 1136586 cri.go:89] found id: ""
	I1208 01:59:29.971882 1136586 logs.go:282] 0 containers: []
	W1208 01:59:29.971891 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.971898 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.971958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:30.000222 1136586 cri.go:89] found id: ""
	I1208 01:59:30.000248 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.000258 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:30.000265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:30.000330 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:30.039259 1136586 cri.go:89] found id: ""
	I1208 01:59:30.039285 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.039295 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:30.039301 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:30.039370 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:30.096203 1136586 cri.go:89] found id: ""
	I1208 01:59:30.096247 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.096258 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:30.096265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:30.096348 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.125007 1136586 cri.go:89] found id: ""
	I1208 01:59:30.125034 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.125044 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.125051 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.125138 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.155888 1136586 cri.go:89] found id: ""
	I1208 01:59:30.155914 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.155924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.155931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.155996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.183068 1136586 cri.go:89] found id: ""
	I1208 01:59:30.183104 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.183114 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.183121 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.183186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.211552 1136586 cri.go:89] found id: ""
	I1208 01:59:30.211577 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.211585 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.211601 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:30.211613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:30.238738 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.238789 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.272245 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.272275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.331871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.331909 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:30.349711 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.349742 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.428964 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:32.929192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.940100 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.940183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.963581 1136586 cri.go:89] found id: ""
	I1208 01:59:32.963602 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.963611 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.963617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.963678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.992028 1136586 cri.go:89] found id: ""
	I1208 01:59:32.992054 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.992063 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.992069 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.992130 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:33.023809 1136586 cri.go:89] found id: ""
	I1208 01:59:33.023836 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.023846 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:33.023852 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:33.023919 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:33.048510 1136586 cri.go:89] found id: ""
	I1208 01:59:33.048533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.048541 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:33.048548 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:33.048608 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:33.075068 1136586 cri.go:89] found id: ""
	I1208 01:59:33.075096 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.075106 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:33.075113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:33.075173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.099238 1136586 cri.go:89] found id: ""
	I1208 01:59:33.099264 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.099273 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.099280 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.099345 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.123805 1136586 cri.go:89] found id: ""
	I1208 01:59:33.123831 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.123840 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.123846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.123905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.152142 1136586 cri.go:89] found id: ""
	I1208 01:59:33.152166 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.152175 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.152184 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.152195 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:33.210457 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.210492 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.225387 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.225415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.288797 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.288820 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:33.288834 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:33.314642 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.314675 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:35.847043 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.865523 1136586 out.go:203] 
	W1208 01:59:35.868530 1136586 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 01:59:35.868757 1136586 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 01:59:35.868776 1136586 out.go:285] * Related issues:
	W1208 01:59:35.868792 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 01:59:35.868833 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 01:59:35.873508 1136586 out.go:203] 
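The suggestion above names the two usual suspects, apiserver flags and SELinux. A minimal way to check both from the host, reusing the exact probe the log itself loops on (a sketch; the profile name newest-cni-457779 comes from this run, and getenforce is only present when SELinux tooling is installed in the guest):

    # Is SELinux enforcing inside the node? Prints Enforcing/Permissive/Disabled.
    minikube -p newest-cni-457779 ssh -- getenforce
    # Re-run the same apiserver probe minikube repeats above.
    minikube -p newest-cni-457779 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'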
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786139868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786216570Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786327677Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786398012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786492035Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786558530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786618051Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786677259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786750130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786832108Z" level=info msg="Connect containerd service"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787154187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787806804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801520989Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801594475Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801680802Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801735276Z" level=info msg="Start recovering state"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842441332Z" level=info msg="Start event monitor"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842660328Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842725527Z" level=info msg="Start streaming server"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842808506Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842872334Z" level=info msg="runtime interface starting up..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842928778Z" level=info msg="starting plugins..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.843007934Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:53:32 newest-cni-457779 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.845003152Z" level=info msg="containerd successfully booted in 0.084434s"
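The "failed to load cni during init" error earlier in this section is expected on a node that has not had a CNI config written yet; containerd retries once something lands in /etc/cni/net.d. A quick confirmation (a sketch against the same node):

    # An empty directory here matches containerd's "no network config found" complaint.
    minikube -p newest-cni-457779 ssh -- ls -la /etc/cni/net.d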
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:39.121795   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.122207   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.123754   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.124093   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:39.125596   13401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:59:39 up  6:42,  0 user,  load average: 1.10, 0.85, 1.23
	Linux newest-cni-457779 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:59:35 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:36 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 08 01:59:36 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:36 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:36 newest-cni-457779 kubelet[13280]: E1208 01:59:36.433249   13280 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:36 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:36 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:37 newest-cni-457779 kubelet[13286]: E1208 01:59:37.171001   13286 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:37 newest-cni-457779 kubelet[13306]: E1208 01:59:37.902777   13306 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:37 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:38 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 08 01:59:38 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:38 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:38 newest-cni-457779 kubelet[13311]: E1208 01:59:38.655925   13311 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:38 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:38 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
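The kubelet section above carries the actual root cause: kubelet v1.35.0-beta.0 fails its configuration validation because the host is still on cgroup v1, so no static pods (including kube-apiserver) are ever created, which is why every apiserver probe earlier returns nothing. Checking which cgroup version a host runs needs only coreutils (a sketch; the 20.04-era kernel string in the kernel section is consistent with a cgroup v1 default):

    # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup/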
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (356.459109ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-457779" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (374.26s)
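systemd had restarted kubelet 485 times by the end of this run; when triaging a loop like that it is usually faster to ask systemd for the counter and the last error directly (a sketch; NRestarts needs a reasonably recent systemd, which the minikube guest image should have):

    minikube -p newest-cni-457779 ssh -- systemctl show kubelet --property=NRestarts
    minikube -p newest-cni-457779 ssh -- sudo journalctl -u kubelet -n 20 --no-pager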

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.84s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
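The waiter behind this line simply lists pods by label; the warnings that follow are that list call failing at the API server. The equivalent manual query, useful for reproducing the connection-refused error by hand (a sketch, assuming the no-preload profile's context is active in kubeconfig):

    # Same labelSelector the helper polls via https://192.168.85.2:8443
    kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard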
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 49 more times]
E1208 01:54:25.030957  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous WARNING repeated 4 more times]
E1208 01:54:30.128646  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous warning repeated 117 more times]
E1208 01:56:28.221312  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous warning repeated 31 more times]
E1208 01:56:59.314511  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
    [previous warning repeated 12 more times]
E1208 01:57:12.528704  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 82 more times]
E1208 01:58:35.601061  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 36 more times]
E1208 01:59:13.211981  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 11 more times]
E1208 01:59:25.031525  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[previous line repeated 4 more times]
E1208 01:59:30.128400  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:00:48.106613  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1208 02:01:11.526261  846711 config.go:182] Loaded profile config "auto-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
E1208 02:01:28.221481  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:01:59.314403  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:02:12.527625  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
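The connection-refused warning above repeats on every poll attempt for the full 9m0s wait (the duplicate lines are collapsed here), until client-go's rate limiter finally surfaces the expired context as the "context deadline exceeded" message. What follows is a minimal sketch of this kind of poll loop, assuming client-go; the function names, poll interval, and kubeconfig path are illustrative, not minikube's actual helper code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls the apiserver until a pod matching selector reaches
// Running, or the context deadline (the test's 9m0s budget) expires.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// While the apiserver is down, each attempt fails with
			// "connect: connection refused"; once ctx expires, client-go
			// instead reports "client rate limiter Wait returned an error:
			// context deadline exceeded".
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(5 * time.Second): // poll interval (assumed)
		}
	}
}

func main() {
	// The kubeconfig path is a placeholder for whatever profile is under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard")
	fmt.Println("wait result:", err)
}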
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 2 (334.770427ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
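The split status here (host container up, apiserver down; see the docker inspect and {{.Host}} check below) can be read in one call instead of one template field per invocation; a sketch using the same --format fields the test templates on:

	# Host, Kubelet, and APIServer are the status fields the helpers query individually.
	out/minikube-linux-arm64 status -p no-preload-536520 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# in this run: host:Running ... apiserver:Stopped (the non-zero exit status is expected then)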
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1128684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:47:25.421292194Z",
	            "FinishedAt": "2025-12-08T01:47:24.077520836Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "508635803fd26385f5b74c49f258f541cf3f3701572a3e277063698fd55748b0",
	            "SandboxKey": "/var/run/docker/netns/508635803fd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b7:e8:6e:2b:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "662425aa0da883d43861485458a7d96ef656064827e7d2e8fc052d0ab70deda4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
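The inspect dump above reduces to a few fields the post-mortem actually keys on; a sketch using docker's Go templates (the nested index expression is the same one the test runner itself uses for host-port lookups later in this log):

	# Container state, static network IP, and the host port mapped to the apiserver.
	docker inspect -f 'status={{.State.Status}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}} apiserver={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-536520
	# in this run: status=running ip=192.168.85.2 apiserver=33871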
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 2 (338.310299ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-475514 sudo systemctl status kubelet --all --full --no-pager                                                                           │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo systemctl cat kubelet --no-pager                                                                                           │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo journalctl -xeu kubelet --all --full --no-pager                                                                            │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /etc/kubernetes/kubelet.conf                                                                                           │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /var/lib/kubelet/config.yaml                                                                                           │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo systemctl status docker --all --full --no-pager                                                                            │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo systemctl cat docker --no-pager                                                                                            │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /etc/docker/daemon.json                                                                                                │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo docker system info                                                                                                         │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo systemctl status cri-docker --all --full --no-pager                                                                        │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo systemctl cat cri-docker --no-pager                                                                                        │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                   │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                             │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cri-dockerd --version                                                                                                      │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo systemctl status containerd --all --full --no-pager                                                                        │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo systemctl cat containerd --no-pager                                                                                        │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /lib/systemd/system/containerd.service                                                                                 │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo cat /etc/containerd/config.toml                                                                                            │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo containerd config dump                                                                                                     │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo systemctl status crio --all --full --no-pager                                                                              │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	│ ssh     │ -p auto-475514 sudo systemctl cat crio --no-pager                                                                                              │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                    │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ ssh     │ -p auto-475514 sudo crio config                                                                                                                │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ delete  │ -p auto-475514                                                                                                                                 │ auto-475514    │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │ 08 Dec 25 02:01 UTC │
	│ start   │ -p kindnet-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd │ kindnet-475514 │ jenkins │ v1.37.0 │ 08 Dec 25 02:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 02:01:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 02:01:41.410420 1161943 out.go:360] Setting OutFile to fd 1 ...
	I1208 02:01:41.410790 1161943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:01:41.410822 1161943 out.go:374] Setting ErrFile to fd 2...
	I1208 02:01:41.410846 1161943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:01:41.411450 1161943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 02:01:41.411941 1161943 out.go:368] Setting JSON to false
	I1208 02:01:41.412837 1161943 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":24254,"bootTime":1765135047,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 02:01:41.412940 1161943 start.go:143] virtualization:  
	I1208 02:01:41.416605 1161943 out.go:179] * [kindnet-475514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 02:01:41.421230 1161943 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 02:01:41.421380 1161943 notify.go:221] Checking for updates...
	I1208 02:01:41.428132 1161943 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 02:01:41.431367 1161943 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 02:01:41.434561 1161943 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 02:01:41.437764 1161943 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 02:01:41.440921 1161943 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 02:01:41.444589 1161943 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 02:01:41.444719 1161943 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 02:01:41.467352 1161943 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 02:01:41.467472 1161943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:01:41.525387 1161943 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:01:41.51584457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:01:41.525500 1161943 docker.go:319] overlay module found
	I1208 02:01:41.528839 1161943 out.go:179] * Using the docker driver based on user configuration
	I1208 02:01:41.531869 1161943 start.go:309] selected driver: docker
	I1208 02:01:41.531894 1161943 start.go:927] validating driver "docker" against <nil>
	I1208 02:01:41.531909 1161943 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 02:01:41.532667 1161943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:01:41.598934 1161943 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:01:41.589688877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:01:41.599079 1161943 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 02:01:41.599314 1161943 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 02:01:41.602437 1161943 out.go:179] * Using Docker driver with root privileges
	I1208 02:01:41.605485 1161943 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:01:41.605526 1161943 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 02:01:41.605615 1161943 start.go:353] cluster config:
	{Name:kindnet-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:01:41.610714 1161943 out.go:179] * Starting "kindnet-475514" primary control-plane node in "kindnet-475514" cluster
	I1208 02:01:41.613643 1161943 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 02:01:41.616677 1161943 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 02:01:41.619585 1161943 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 02:01:41.619677 1161943 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:01:41.619704 1161943 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1208 02:01:41.619712 1161943 cache.go:65] Caching tarball of preloaded images
	I1208 02:01:41.619793 1161943 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 02:01:41.619803 1161943 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1208 02:01:41.619905 1161943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/config.json ...
	I1208 02:01:41.619928 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/config.json: {Name:mkadf104c7d1f2c1b0195a3ef3a56eecb3968912 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:41.639021 1161943 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 02:01:41.639045 1161943 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 02:01:41.639060 1161943 cache.go:243] Successfully downloaded all kic artifacts
	I1208 02:01:41.639090 1161943 start.go:360] acquireMachinesLock for kindnet-475514: {Name:mk24da8c2bb0b1b62b3fb0068288370ba2be6741 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 02:01:41.639196 1161943 start.go:364] duration metric: took 84.628µs to acquireMachinesLock for "kindnet-475514"
	I1208 02:01:41.639227 1161943 start.go:93] Provisioning new machine with config: &{Name:kindnet-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 02:01:41.639310 1161943 start.go:125] createHost starting for "" (driver="docker")
	I1208 02:01:41.642735 1161943 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 02:01:41.643058 1161943 start.go:159] libmachine.API.Create for "kindnet-475514" (driver="docker")
	I1208 02:01:41.643101 1161943 client.go:173] LocalClient.Create starting
	I1208 02:01:41.643201 1161943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 02:01:41.643247 1161943 main.go:143] libmachine: Decoding PEM data...
	I1208 02:01:41.643283 1161943 main.go:143] libmachine: Parsing certificate...
	I1208 02:01:41.643358 1161943 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 02:01:41.643380 1161943 main.go:143] libmachine: Decoding PEM data...
	I1208 02:01:41.643399 1161943 main.go:143] libmachine: Parsing certificate...
	I1208 02:01:41.643867 1161943 cli_runner.go:164] Run: docker network inspect kindnet-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 02:01:41.661535 1161943 cli_runner.go:211] docker network inspect kindnet-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 02:01:41.661632 1161943 network_create.go:284] running [docker network inspect kindnet-475514] to gather additional debugging logs...
	I1208 02:01:41.661662 1161943 cli_runner.go:164] Run: docker network inspect kindnet-475514
	W1208 02:01:41.678639 1161943 cli_runner.go:211] docker network inspect kindnet-475514 returned with exit code 1
	I1208 02:01:41.678674 1161943 network_create.go:287] error running [docker network inspect kindnet-475514]: docker network inspect kindnet-475514: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-475514 not found
	I1208 02:01:41.678688 1161943 network_create.go:289] output of [docker network inspect kindnet-475514]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-475514 not found
	
	** /stderr **
	I1208 02:01:41.678822 1161943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:01:41.696106 1161943 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 02:01:41.696549 1161943 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 02:01:41.696972 1161943 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 02:01:41.697477 1161943 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ffbb0}
	I1208 02:01:41.697499 1161943 network_create.go:124] attempt to create docker network kindnet-475514 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 02:01:41.697562 1161943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-475514 kindnet-475514
	I1208 02:01:41.760042 1161943 network_create.go:108] docker network kindnet-475514 192.168.76.0/24 created
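The subnet probing above (192.168.49/58/67 taken, 192.168.76.0/24 free) can be reproduced by walking the existing networks; a minimal sketch:

	# List each docker network with its claimed IPv4 subnet, as the skip logic sees them.
	for net in $(docker network ls --format '{{.Name}}'); do
	  printf '%s\t%s\n' "$net" \
	    "$(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' "$net")"
	done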
	I1208 02:01:41.760079 1161943 kic.go:121] calculated static IP "192.168.76.2" for the "kindnet-475514" container
	I1208 02:01:41.760154 1161943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 02:01:41.776448 1161943 cli_runner.go:164] Run: docker volume create kindnet-475514 --label name.minikube.sigs.k8s.io=kindnet-475514 --label created_by.minikube.sigs.k8s.io=true
	I1208 02:01:41.794970 1161943 oci.go:103] Successfully created a docker volume kindnet-475514
	I1208 02:01:41.795063 1161943 cli_runner.go:164] Run: docker run --rm --name kindnet-475514-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-475514 --entrypoint /usr/bin/test -v kindnet-475514:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 02:01:42.392819 1161943 oci.go:107] Successfully prepared a docker volume kindnet-475514
	I1208 02:01:42.392881 1161943 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:01:42.392906 1161943 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 02:01:42.392982 1161943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-475514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 02:01:46.389439 1161943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v kindnet-475514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.996417229s)
	I1208 02:01:46.389472 1161943 kic.go:203] duration metric: took 3.996562724s to extract preloaded images to volume ...
	W1208 02:01:46.389630 1161943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 02:01:46.389751 1161943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 02:01:46.460220 1161943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-475514 --name kindnet-475514 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-475514 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-475514 --network kindnet-475514 --ip 192.168.76.2 --volume kindnet-475514:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
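Each --publish=127.0.0.1::<port> flag above binds an ephemeral host port; the mapping the daemon picked can be recovered after the container starts, e.g.:

	# Resolve the ephemeral host ports chosen for SSH and the apiserver.
	docker port kindnet-475514 22/tcp     # 127.0.0.1:33883 in this run
	docker port kindnet-475514 8443/tcp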
	I1208 02:01:46.774644 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Running}}
	I1208 02:01:46.798835 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:01:46.820503 1161943 cli_runner.go:164] Run: docker exec kindnet-475514 stat /var/lib/dpkg/alternatives/iptables
	I1208 02:01:46.868948 1161943 oci.go:144] the created container "kindnet-475514" has a running status.
	I1208 02:01:46.868977 1161943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa...
	I1208 02:01:47.224650 1161943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 02:01:47.254659 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:01:47.280186 1161943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 02:01:47.280210 1161943 kic_runner.go:114] Args: [docker exec --privileged kindnet-475514 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 02:01:47.338834 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:01:47.365322 1161943 machine.go:94] provisionDockerMachine start ...
	I1208 02:01:47.365407 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:47.397277 1161943 main.go:143] libmachine: Using SSH client type: native
	I1208 02:01:47.397619 1161943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33883 <nil> <nil>}
	I1208 02:01:47.397634 1161943 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 02:01:47.398263 1161943 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59270->127.0.0.1:33883: read: connection reset by peer
	I1208 02:01:50.554314 1161943 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-475514
	
	I1208 02:01:50.554342 1161943 ubuntu.go:182] provisioning hostname "kindnet-475514"
	I1208 02:01:50.554475 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:50.572407 1161943 main.go:143] libmachine: Using SSH client type: native
	I1208 02:01:50.572723 1161943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33883 <nil> <nil>}
	I1208 02:01:50.572734 1161943 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-475514 && echo "kindnet-475514" | sudo tee /etc/hostname
	I1208 02:01:50.731918 1161943 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-475514
	
	I1208 02:01:50.732004 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:50.749415 1161943 main.go:143] libmachine: Using SSH client type: native
	I1208 02:01:50.749743 1161943 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33883 <nil> <nil>}
	I1208 02:01:50.749768 1161943 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-475514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-475514/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-475514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 02:01:50.906655 1161943 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 02:01:50.906739 1161943 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 02:01:50.906784 1161943 ubuntu.go:190] setting up certificates
	I1208 02:01:50.906812 1161943 provision.go:84] configureAuth start
	I1208 02:01:50.906895 1161943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-475514
	I1208 02:01:50.923252 1161943 provision.go:143] copyHostCerts
	I1208 02:01:50.923330 1161943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 02:01:50.923346 1161943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 02:01:50.923429 1161943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 02:01:50.923539 1161943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 02:01:50.923551 1161943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 02:01:50.923579 1161943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 02:01:50.923647 1161943 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 02:01:50.923656 1161943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 02:01:50.923681 1161943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 02:01:50.923745 1161943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.kindnet-475514 san=[127.0.0.1 192.168.76.2 kindnet-475514 localhost minikube]
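The generated server cert can be checked against that san=[...] list with stock openssl (cert path taken from this run's machine store):

	# Print the Subject Alternative Name extension of the machine's server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect the entries logged above: 127.0.0.1 192.168.76.2 kindnet-475514 localhost minikube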
	I1208 02:01:51.354955 1161943 provision.go:177] copyRemoteCerts
	I1208 02:01:51.355025 1161943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 02:01:51.355072 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:51.372183 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:01:51.478810 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 02:01:51.496989 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1208 02:01:51.515817 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 02:01:51.533926 1161943 provision.go:87] duration metric: took 627.081359ms to configureAuth
	I1208 02:01:51.533966 1161943 ubuntu.go:206] setting minikube options for container-runtime
	I1208 02:01:51.534153 1161943 config.go:182] Loaded profile config "kindnet-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 02:01:51.534160 1161943 machine.go:97] duration metric: took 4.16882131s to provisionDockerMachine
	I1208 02:01:51.534167 1161943 client.go:176] duration metric: took 9.891051232s to LocalClient.Create
	I1208 02:01:51.534181 1161943 start.go:167] duration metric: took 9.891126539s to libmachine.API.Create "kindnet-475514"
	I1208 02:01:51.534188 1161943 start.go:293] postStartSetup for "kindnet-475514" (driver="docker")
	I1208 02:01:51.534197 1161943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 02:01:51.534249 1161943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 02:01:51.534287 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:51.551209 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:01:51.658650 1161943 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 02:01:51.662296 1161943 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 02:01:51.662328 1161943 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 02:01:51.662348 1161943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 02:01:51.662403 1161943 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 02:01:51.662524 1161943 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 02:01:51.662642 1161943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 02:01:51.669965 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 02:01:51.688097 1161943 start.go:296] duration metric: took 153.894038ms for postStartSetup
	I1208 02:01:51.688477 1161943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-475514
	I1208 02:01:51.705606 1161943 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/config.json ...
	I1208 02:01:51.705892 1161943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 02:01:51.705952 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:51.723507 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:01:51.827277 1161943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 02:01:51.832103 1161943 start.go:128] duration metric: took 10.19277695s to createHost
	I1208 02:01:51.832133 1161943 start.go:83] releasing machines lock for "kindnet-475514", held for 10.192918399s
	I1208 02:01:51.832207 1161943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-475514
	I1208 02:01:51.848674 1161943 ssh_runner.go:195] Run: cat /version.json
	I1208 02:01:51.848730 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:51.848749 1161943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 02:01:51.848802 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:01:51.871882 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:01:51.874586 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:01:52.072681 1161943 ssh_runner.go:195] Run: systemctl --version
	I1208 02:01:52.079368 1161943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 02:01:52.083741 1161943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 02:01:52.083829 1161943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 02:01:52.111286 1161943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
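Note: the find/mv pass above neuters competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later (kindnet here) stays active. The same operation, re-quoted as a safer sketch:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;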
	I1208 02:01:52.111312 1161943 start.go:496] detecting cgroup driver to use...
	I1208 02:01:52.111347 1161943 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 02:01:52.111412 1161943 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 02:01:52.128471 1161943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 02:01:52.142515 1161943 docker.go:218] disabling cri-docker service (if available) ...
	I1208 02:01:52.142582 1161943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 02:01:52.160626 1161943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 02:01:52.179139 1161943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 02:01:52.297940 1161943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 02:01:52.446626 1161943 docker.go:234] disabling docker service ...
	I1208 02:01:52.446747 1161943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 02:01:52.468305 1161943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 02:01:52.482023 1161943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 02:01:52.604102 1161943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 02:01:52.728174 1161943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 02:01:52.742267 1161943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 02:01:52.756936 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 02:01:52.766631 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 02:01:52.776267 1161943 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 02:01:52.776353 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 02:01:52.785295 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 02:01:52.794830 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 02:01:52.803694 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 02:01:52.812693 1161943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 02:01:52.821007 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 02:01:52.829873 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 02:01:52.838803 1161943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 02:01:52.847555 1161943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 02:01:52.855258 1161943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 02:01:52.862772 1161943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:01:52.970742 1161943 ssh_runner.go:195] Run: sudo systemctl restart containerd
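Note: the sed edits above retarget an existing /etc/containerd/config.toml rather than writing a fresh one. A minimal sketch of the fragment they converge on (illustrative only; table placement follows the patterns the seds match, not a dump of the node's actual file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false   # matches the "cgroupfs" driver detected on the host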
	I1208 02:01:53.105167 1161943 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 02:01:53.105316 1161943 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 02:01:53.110253 1161943 start.go:564] Will wait 60s for crictl version
	I1208 02:01:53.110372 1161943 ssh_runner.go:195] Run: which crictl
	I1208 02:01:53.114874 1161943 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 02:01:53.142989 1161943 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 02:01:53.143141 1161943 ssh_runner.go:195] Run: containerd --version
	I1208 02:01:53.167607 1161943 ssh_runner.go:195] Run: containerd --version
	I1208 02:01:53.193744 1161943 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1208 02:01:53.196791 1161943 cli_runner.go:164] Run: docker network inspect kindnet-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:01:53.213031 1161943 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 02:01:53.216803 1161943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
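Note: unrolled, the one-liner above is a read-modify-write of /etc/hosts: strip any stale host.minikube.internal entry, append the fresh mapping, then copy the temp file back in a single sudo step:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.76.1	host.minikube.internal"
    } > /tmp/h.$$            # $$ keeps the temp file unique per shell
    sudo cp /tmp/h.$$ /etc/hosts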
	I1208 02:01:53.226774 1161943 kubeadm.go:884] updating cluster {Name:kindnet-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 02:01:53.226902 1161943 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:01:53.226969 1161943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:01:53.252606 1161943 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 02:01:53.252635 1161943 containerd.go:534] Images already preloaded, skipping extraction
	I1208 02:01:53.252699 1161943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:01:53.278114 1161943 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 02:01:53.278141 1161943 cache_images.go:86] Images are preloaded, skipping loading
	I1208 02:01:53.278149 1161943 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1208 02:01:53.278248 1161943 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-475514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
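Note: the empty ExecStart= line in the unit above is the usual systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the full command on the next line replaces it rather than being appended. After the daemon-reload further down, the merged unit can be inspected with, e.g.:

    # show the effective kubelet unit, drop-ins included
    systemctl cat kubelet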
	I1208 02:01:53.278330 1161943 ssh_runner.go:195] Run: sudo crictl info
	I1208 02:01:53.304474 1161943 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:01:53.304508 1161943 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 02:01:53.304565 1161943 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-475514 NodeName:kindnet-475514 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 02:01:53.304709 1161943 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-475514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 02:01:53.304784 1161943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 02:01:53.313207 1161943 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 02:01:53.313275 1161943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 02:01:53.321118 1161943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1208 02:01:53.334249 1161943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 02:01:53.347919 1161943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
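Note: the kubeadm.yaml.new staged here is the three-document config printed above (InitConfiguration and ClusterConfiguration, then KubeletConfiguration and KubeProxyConfiguration), later handed to kubeadm init --config. As a sketch, recent kubeadm releases can sanity-check such a file in place (assuming the "config validate" subcommand is present in this kubeadm build):

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml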
	I1208 02:01:53.361548 1161943 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 02:01:53.365423 1161943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:01:53.375183 1161943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:01:53.489532 1161943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:01:53.514730 1161943 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514 for IP: 192.168.76.2
	I1208 02:01:53.514752 1161943 certs.go:195] generating shared ca certs ...
	I1208 02:01:53.514768 1161943 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:53.514907 1161943 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 02:01:53.514954 1161943 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 02:01:53.514966 1161943 certs.go:257] generating profile certs ...
	I1208 02:01:53.515025 1161943 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.key
	I1208 02:01:53.515044 1161943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt with IP's: []
	I1208 02:01:53.571690 1161943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt ...
	I1208 02:01:53.571724 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: {Name:mk2929a20f6d15a537613db7527b4b57c29c5132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:53.571951 1161943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.key ...
	I1208 02:01:53.571966 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.key: {Name:mka7ac5efa5a67f2d17ab8dad1f24d78b396e13b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:53.572073 1161943 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key.a65b0ca4
	I1208 02:01:53.572094 1161943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt.a65b0ca4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 02:01:53.946475 1161943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt.a65b0ca4 ...
	I1208 02:01:53.946507 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt.a65b0ca4: {Name:mk17e5f1f3b3d0d3659ded0dd450fcaf048fb30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:53.946696 1161943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key.a65b0ca4 ...
	I1208 02:01:53.946711 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key.a65b0ca4: {Name:mkdc664fbdc6065ed9a3e74556be6880b24b5fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:53.946799 1161943 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt.a65b0ca4 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt
	I1208 02:01:53.946877 1161943 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key.a65b0ca4 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key
	I1208 02:01:53.946942 1161943 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.key
	I1208 02:01:53.946961 1161943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.crt with IP's: []
	I1208 02:01:54.089455 1161943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.crt ...
	I1208 02:01:54.089494 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.crt: {Name:mk7aa7fb47492a2e8874d999e0d365d7642f1b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:54.089705 1161943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.key ...
	I1208 02:01:54.089725 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.key: {Name:mkcbc8e88bf9faf49b140fddb10e8065da80cec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:01:54.089926 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 02:01:54.089972 1161943 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 02:01:54.089986 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 02:01:54.090016 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 02:01:54.090044 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 02:01:54.090071 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 02:01:54.090121 1161943 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 02:01:54.090786 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 02:01:54.111021 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 02:01:54.129730 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 02:01:54.147924 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 02:01:54.166817 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 02:01:54.185895 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1208 02:01:54.203836 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 02:01:54.222195 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 02:01:54.239782 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 02:01:54.259251 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 02:01:54.277000 1161943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 02:01:54.294860 1161943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
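Note: at this point the node holds the shared CAs, the apiserver serving pair (signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2, per the generation step above), and the proxy-client pair used by the API aggregator. An illustrative spot-check of the serving cert's SANs:

    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'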
	I1208 02:01:54.307929 1161943 ssh_runner.go:195] Run: openssl version
	I1208 02:01:54.314433 1161943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:01:54.321912 1161943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 02:01:54.329298 1161943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:01:54.333113 1161943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:01:54.333222 1161943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:01:54.374930 1161943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 02:01:54.382627 1161943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 02:01:54.390091 1161943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 02:01:54.397753 1161943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 02:01:54.405489 1161943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 02:01:54.409652 1161943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 02:01:54.409746 1161943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 02:01:54.452331 1161943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 02:01:54.460652 1161943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
	I1208 02:01:54.468266 1161943 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 02:01:54.476105 1161943 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 02:01:54.483727 1161943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 02:01:54.487439 1161943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 02:01:54.487508 1161943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 02:01:54.528717 1161943 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 02:01:54.536824 1161943 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
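Note: the openssl x509 -hash calls explain the opaque symlink names (b5213941.0, 51391683.0, 3ec20f2e.0): OpenSSL resolves CA certificates by subject hash, so each PEM gets a <hash>.0 link in /etc/ssl/certs. The pattern, sketched for the first cert:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"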
	I1208 02:01:54.544690 1161943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 02:01:54.548509 1161943 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 02:01:54.548572 1161943 kubeadm.go:401] StartCluster: {Name:kindnet-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:01:54.548657 1161943 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 02:01:54.548726 1161943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 02:01:54.576458 1161943 cri.go:89] found id: ""
	I1208 02:01:54.576561 1161943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 02:01:54.584574 1161943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 02:01:54.593422 1161943 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 02:01:54.593504 1161943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 02:01:54.605566 1161943 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 02:01:54.605594 1161943 kubeadm.go:158] found existing configuration files:
	
	I1208 02:01:54.605645 1161943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 02:01:54.614707 1161943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 02:01:54.614799 1161943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 02:01:54.623819 1161943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 02:01:54.635658 1161943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 02:01:54.635788 1161943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 02:01:54.648196 1161943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 02:01:54.659982 1161943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 02:01:54.660129 1161943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 02:01:54.672378 1161943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 02:01:54.683904 1161943 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 02:01:54.684000 1161943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 02:01:54.691704 1161943 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 02:01:54.735727 1161943 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 02:01:54.735804 1161943 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 02:01:54.773816 1161943 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 02:01:54.773893 1161943 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 02:01:54.773934 1161943 kubeadm.go:319] OS: Linux
	I1208 02:01:54.773985 1161943 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 02:01:54.774040 1161943 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 02:01:54.774094 1161943 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 02:01:54.774146 1161943 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 02:01:54.774198 1161943 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 02:01:54.774250 1161943 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 02:01:54.774298 1161943 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 02:01:54.774351 1161943 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 02:01:54.774412 1161943 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 02:01:54.845870 1161943 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 02:01:54.846018 1161943 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 02:01:54.846140 1161943 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 02:01:54.852860 1161943 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 02:01:54.859629 1161943 out.go:252]   - Generating certificates and keys ...
	I1208 02:01:54.859805 1161943 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 02:01:54.859924 1161943 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 02:01:55.085097 1161943 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 02:01:55.873693 1161943 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 02:01:56.673017 1161943 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 02:01:58.494821 1161943 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 02:01:59.356362 1161943 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 02:01:59.356515 1161943 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-475514 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 02:01:59.537565 1161943 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 02:01:59.537850 1161943 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-475514 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 02:01:59.862792 1161943 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 02:02:00.014168 1161943 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 02:02:00.600540 1161943 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 02:02:00.600615 1161943 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 02:02:00.924466 1161943 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 02:02:01.690232 1161943 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 02:02:01.764578 1161943 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 02:02:01.940434 1161943 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 02:02:02.271962 1161943 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 02:02:02.272784 1161943 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 02:02:02.275740 1161943 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 02:02:02.279159 1161943 out.go:252]   - Booting up control plane ...
	I1208 02:02:02.279321 1161943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 02:02:02.279441 1161943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 02:02:02.280896 1161943 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 02:02:02.298907 1161943 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 02:02:02.299112 1161943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 02:02:02.307634 1161943 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 02:02:02.308310 1161943 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 02:02:02.308609 1161943 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 02:02:02.443287 1161943 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 02:02:02.443412 1161943 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 02:02:03.944622 1161943 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50143367s
	I1208 02:02:03.948288 1161943 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 02:02:03.948384 1161943 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1208 02:02:03.948483 1161943 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 02:02:03.948774 1161943 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 02:02:07.245783 1161943 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.297104983s
	I1208 02:02:09.818349 1161943 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.870045109s
	I1208 02:02:11.450572 1161943 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.502191746s
	I1208 02:02:11.488168 1161943 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 02:02:11.500846 1161943 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 02:02:11.522141 1161943 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 02:02:11.522357 1161943 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-475514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 02:02:11.536845 1161943 kubeadm.go:319] [bootstrap-token] Using token: 1jj8gg.b7zneyyedzt0g1yb
	I1208 02:02:11.539843 1161943 out.go:252]   - Configuring RBAC rules ...
	I1208 02:02:11.539978 1161943 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 02:02:11.544792 1161943 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 02:02:11.555162 1161943 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 02:02:11.559492 1161943 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 02:02:11.564133 1161943 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 02:02:11.568188 1161943 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 02:02:11.861258 1161943 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 02:02:12.335176 1161943 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 02:02:12.860814 1161943 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 02:02:12.862002 1161943 kubeadm.go:319] 
	I1208 02:02:12.862079 1161943 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 02:02:12.862091 1161943 kubeadm.go:319] 
	I1208 02:02:12.862171 1161943 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 02:02:12.862177 1161943 kubeadm.go:319] 
	I1208 02:02:12.862201 1161943 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 02:02:12.862257 1161943 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 02:02:12.862310 1161943 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 02:02:12.862315 1161943 kubeadm.go:319] 
	I1208 02:02:12.862366 1161943 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 02:02:12.862378 1161943 kubeadm.go:319] 
	I1208 02:02:12.862423 1161943 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 02:02:12.862430 1161943 kubeadm.go:319] 
	I1208 02:02:12.862507 1161943 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 02:02:12.862583 1161943 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 02:02:12.862658 1161943 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 02:02:12.862666 1161943 kubeadm.go:319] 
	I1208 02:02:12.862745 1161943 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 02:02:12.862821 1161943 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 02:02:12.862829 1161943 kubeadm.go:319] 
	I1208 02:02:12.862908 1161943 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 1jj8gg.b7zneyyedzt0g1yb \
	I1208 02:02:12.863008 1161943 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:530bb87f6b7bce2c34e587eb3e9dbf3b51e460a8d6fb3f3266be1a74dec16e58 \
	I1208 02:02:12.863032 1161943 kubeadm.go:319] 	--control-plane 
	I1208 02:02:12.863040 1161943 kubeadm.go:319] 
	I1208 02:02:12.863120 1161943 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 02:02:12.863129 1161943 kubeadm.go:319] 
	I1208 02:02:12.863206 1161943 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 1jj8gg.b7zneyyedzt0g1yb \
	I1208 02:02:12.863312 1161943 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:530bb87f6b7bce2c34e587eb3e9dbf3b51e460a8d6fb3f3266be1a74dec16e58 
	I1208 02:02:12.867951 1161943 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1208 02:02:12.868170 1161943 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1208 02:02:12.868274 1161943 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1208 02:02:12.868292 1161943 cni.go:84] Creating CNI manager for "kindnet"
	I1208 02:02:12.871429 1161943 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1208 02:02:12.874496 1161943 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1208 02:02:12.879146 1161943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1208 02:02:12.879166 1161943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1208 02:02:12.893124 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1208 02:02:13.195397 1161943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 02:02:13.195484 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:13.195566 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-475514 minikube.k8s.io/updated_at=2025_12_08T02_02_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=f5cff42f65f8043a145b28acc2164a21aaf35c47 minikube.k8s.io/name=kindnet-475514 minikube.k8s.io/primary=true
	I1208 02:02:13.450480 1161943 ops.go:34] apiserver oom_adj: -16
	I1208 02:02:13.450502 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:13.951062 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:14.451226 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:14.950979 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:15.450727 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:15.951474 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:16.451491 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:16.951541 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:17.451172 1161943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 02:02:17.634987 1161943 kubeadm.go:1114] duration metric: took 4.439595162s to wait for elevateKubeSystemPrivileges
	I1208 02:02:17.635019 1161943 kubeadm.go:403] duration metric: took 23.08645204s to StartCluster
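Note: the burst of `kubectl get sa default` calls above is a poll at roughly 500ms intervals; minikube treats the appearance of the default ServiceAccount as its signal that the minikube-rbac clusterrolebinding created earlier has taken hold. Its shell equivalent, as a sketch:

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done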
	I1208 02:02:17.635049 1161943 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:02:17.635118 1161943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 02:02:17.636246 1161943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:02:17.636516 1161943 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 02:02:17.636639 1161943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 02:02:17.636932 1161943 config.go:182] Loaded profile config "kindnet-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 02:02:17.636987 1161943 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 02:02:17.637074 1161943 addons.go:70] Setting storage-provisioner=true in profile "kindnet-475514"
	I1208 02:02:17.637104 1161943 addons.go:239] Setting addon storage-provisioner=true in "kindnet-475514"
	I1208 02:02:17.637135 1161943 host.go:66] Checking if "kindnet-475514" exists ...
	I1208 02:02:17.637676 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:02:17.638232 1161943 addons.go:70] Setting default-storageclass=true in profile "kindnet-475514"
	I1208 02:02:17.638253 1161943 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-475514"
	I1208 02:02:17.638608 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:02:17.640439 1161943 out.go:179] * Verifying Kubernetes components...
	I1208 02:02:17.644747 1161943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:02:17.689176 1161943 addons.go:239] Setting addon default-storageclass=true in "kindnet-475514"
	I1208 02:02:17.689215 1161943 host.go:66] Checking if "kindnet-475514" exists ...
	I1208 02:02:17.689678 1161943 cli_runner.go:164] Run: docker container inspect kindnet-475514 --format={{.State.Status}}
	I1208 02:02:17.697957 1161943 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 02:02:17.700969 1161943 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 02:02:17.700995 1161943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 02:02:17.701064 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:02:17.728749 1161943 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 02:02:17.728783 1161943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 02:02:17.728860 1161943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-475514
	I1208 02:02:17.757054 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:02:17.769738 1161943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/kindnet-475514/id_rsa Username:docker}
	I1208 02:02:17.907569 1161943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1208 02:02:18.032039 1161943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:02:18.123168 1161943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 02:02:18.229436 1161943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 02:02:18.618698 1161943 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1208 02:02:18.621422 1161943 node_ready.go:35] waiting up to 15m0s for node "kindnet-475514" to be "Ready" ...
	I1208 02:02:19.038676 1161943 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1208 02:02:19.041491 1161943 addons.go:530] duration metric: took 1.404496607s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1208 02:02:19.123271 1161943 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-475514" context rescaled to 1 replicas
	W1208 02:02:20.624855 1161943 node_ready.go:57] node "kindnet-475514" has "Ready":"False" status (will retry)
	W1208 02:02:22.630618 1161943 node_ready.go:57] node "kindnet-475514" has "Ready":"False" status (will retry)
	W1208 02:02:25.124187 1161943 node_ready.go:57] node "kindnet-475514" has "Ready":"False" status (will retry)
	W1208 02:02:27.125316 1161943 node_ready.go:57] node "kindnet-475514" has "Ready":"False" status (will retry)
	W1208 02:02:29.125645 1161943 node_ready.go:57] node "kindnet-475514" has "Ready":"False" status (will retry)
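Note: the node reports "Ready":"False" until the kindnet DaemonSet drops its CNI config under /etc/cni/net.d and the kubelet flips its Ready condition; the retries above poll exactly that condition, which can also be read directly:

    kubectl get node kindnet-475514 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'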
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012707347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012722928Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012784722Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012807458Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012980408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012995694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013007829Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013026414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013056306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013096109Z" level=info msg="Connect containerd service"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013397585Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.014248932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024764086Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024952617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025010275Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025073168Z" level=info msg="Start recovering state"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046219867Z" level=info msg="Start event monitor"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046299482Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046310116Z" level=info msg="Start streaming server"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046320315Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046329185Z" level=info msg="runtime interface starting up..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046337029Z" level=info msg="starting plugins..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046369292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:47:31 no-preload-536520 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.048165739Z" level=info msg="containerd successfully booted in 0.067149s"
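	
	The only error in this otherwise clean containerd boot, "no network config found in /etc/cni/net.d", is expected at this stage: the CNI config is written later by the network plugin once kubelet schedules it, so the CRI plugin starts with pod networking deferred. Two checks that show that state from inside the node (a sketch; the paths are the containerd defaults named above):
	  # empty until a CNI plugin drops its config
	  sudo ls -la /etc/cni/net.d
	  # the CRI plugin's own view of network readiness
	  sudo crictl info | grep -i -A 3 networkReady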
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:02:36.653302    8187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.654133    8187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.655792    8187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.656195    8187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:02:36.657861    8187 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
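	
	All five memcache errors are the same failure: kubectl dials the apiserver on localhost:8443 and is refused, consistent with the kubelet crash loop below (no kubelet, so the static apiserver pod is never started). Quick triage from inside the node, e.g. after `minikube ssh -p no-preload-536520` (a sketch of standard checks, not harness output):
	  # is anything listening on the apiserver port?
	  sudo ss -ltnp | grep 8443
	  # were control-plane containers ever created?
	  sudo crictl ps -a | grep kube-apiserver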
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:02:36 up  6:45,  0 user,  load average: 1.89, 1.36, 1.36
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:02:33 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:34 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1201.
	Dec 08 02:02:34 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:34 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:34 no-preload-536520 kubelet[8051]: E1208 02:02:34.402502    8051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:34 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:34 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:35 no-preload-536520 kubelet[8057]: E1208 02:02:35.142240    8057 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:35 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:35 no-preload-536520 kubelet[8085]: E1208 02:02:35.897131    8085 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:35 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:02:36 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 08 02:02:36 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:36 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:02:36 no-preload-536520 kubelet[8188]: E1208 02:02:36.641862    8188 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:02:36 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:02:36 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
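	
	Restart counters 1201-1204 inside three seconds show systemd cycling kubelet as fast as its policy allows, and every attempt dies in the same place: the v1.35.0-beta.0 kubelet's config validation refuses to run on a cgroup v1 host. The host side is easy to confirm; on the kubelet side this behavior is governed by the failCgroupV1 KubeletConfiguration field added for the cgroup v1 deprecation (a sketch, assuming that field is what trips here):
	  # cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1, as on this Ubuntu 20.04 host
	  stat -fc %T /sys/fs/cgroup/
	  # moving a systemd host to cgroup v2 requires booting with:
	  #   systemd.unified_cgroup_hierarchy=1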
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 2 (350.810847ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.84s)

x
+
TestStartStop/group/newest-cni/serial/Pause (9.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-457779 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (316.254116ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-457779 -n newest-cni-457779
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (350.321862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-457779 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (321.729326ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-457779 -n newest-cni-457779
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (318.142133ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
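The test's contract is that `pause` leaves the apiserver reporting "Paused" and `unpause` returns both apiserver and kubelet to "Running"; every probe here reads "Stopped" because the control plane never came back up after the restart. The probes can be reproduced verbatim from the commands logged above:

    out/minikube-linux-arm64 pause -p newest-cni-457779 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
    out/minikube-linux-arm64 unpause -p newest-cni-457779 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-457779 -n newest-cni-457779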
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-457779
helpers_test.go:243: (dbg) docker inspect newest-cni-457779:

-- stdout --
	[
	    {
	        "Id": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	        "Created": "2025-12-08T01:43:39.768991386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:53:27.037311302Z",
	            "FinishedAt": "2025-12-08T01:53:25.665351923Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hostname",
	        "HostsPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hosts",
	        "LogPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515-json.log",
	        "Name": "/newest-cni-457779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-457779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-457779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	                "LowerDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-457779",
	                "Source": "/var/lib/docker/volumes/newest-cni-457779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-457779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-457779",
	                "name.minikube.sigs.k8s.io": "newest-cni-457779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1a947731c9f343bfc621f32c5e5e6b87b4d6596e40159c82f35b05d4b004c86",
	            "SandboxKey": "/var/run/docker/netns/a1a947731c9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-457779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d0:aa:7b:8e:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e759035a3431798f7b6fae1fcd872afa7240c356fb1da4c53589714768a6edc3",
	                    "EndpointID": "88ca36c415275c64fba1e1779bb8c75173dfd0b7a6e82aa393b48ff675c0db50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-457779",
	                        "638bfd2d42fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
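The inspect dump shows the container Running with dynamic host-port bindings on 127.0.0.1 (22/tcp -> 33873, 8443/tcp -> 33876). Individual fields can be pulled with the same Go-template mechanism minikube itself uses later in this log, for example:

    # host port mapped to the apiserver port inside the container
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-457779
    # container run state at a glance
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-457779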
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (347.72777ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25: (1.788892439s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:51 UTC │                     │
	│ stop    │ -p newest-cni-457779 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-457779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │                     │
	│ image   │ newest-cni-457779 image list --format=json                                                                                                                                                                                                                 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	│ pause   │ -p newest-cni-457779 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	│ unpause │ -p newest-cni-457779 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
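	
	The header above spells out the klog line grammar: severity letter (I/W/E/F), mmdd date, timestamp, thread id, file:line, then the message. That makes the dump easy to slice with standard tools, assuming the raw output has been saved to a file such as minikube.log:
	  # warnings and errors only
	  grep -E '^[WE][0-9]{4} ' minikube.log
	  # every iteration of the node Ready poll
	  grep 'node_ready.go' minikube.log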
	I1208 01:53:26.756000 1136586 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:53:26.756538 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756548 1136586 out.go:374] Setting ErrFile to fd 2...
	I1208 01:53:26.756553 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756842 1136586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:53:26.757268 1136586 out.go:368] Setting JSON to false
	I1208 01:53:26.758219 1136586 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23760,"bootTime":1765135047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:53:26.758285 1136586 start.go:143] virtualization:  
	I1208 01:53:26.761027 1136586 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:53:26.763300 1136586 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:53:26.763385 1136586 notify.go:221] Checking for updates...
	I1208 01:53:26.769236 1136586 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:53:26.772301 1136586 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:26.775351 1136586 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:53:26.778370 1136586 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:53:26.781331 1136586 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:53:26.784939 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:26.785587 1136586 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:53:26.821497 1136586 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:53:26.821612 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.884858 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.874574541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.884969 1136586 docker.go:319] overlay module found
	I1208 01:53:26.888166 1136586 out.go:179] * Using the docker driver based on existing profile
	I1208 01:53:26.891132 1136586 start.go:309] selected driver: docker
	I1208 01:53:26.891162 1136586 start.go:927] validating driver "docker" against &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.891271 1136586 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:53:26.892009 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.946578 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.937487208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.946934 1136586 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:53:26.946970 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:26.947032 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:26.947088 1136586 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.951997 1136586 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:53:26.954840 1136586 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:53:26.957745 1136586 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:53:26.960653 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:26.960709 1136586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:53:26.960722 1136586 cache.go:65] Caching tarball of preloaded images
	I1208 01:53:26.960734 1136586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:53:26.960819 1136586 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:53:26.960831 1136586 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:53:26.961033 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:26.980599 1136586 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:53:26.980630 1136586 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:53:26.980646 1136586 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:53:26.980676 1136586 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:53:26.980741 1136586 start.go:364] duration metric: took 41.994µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:53:26.980766 1136586 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:53:26.980775 1136586 fix.go:54] fixHost starting: 
	I1208 01:53:26.981064 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:26.998167 1136586 fix.go:112] recreateIfNeeded on newest-cni-457779: state=Stopped err=<nil>
	W1208 01:53:26.998205 1136586 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:53:25.593347 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:27.593483 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:30.093460 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:27.003360 1136586 out.go:252] * Restarting existing docker container for "newest-cni-457779" ...
	I1208 01:53:27.003497 1136586 cli_runner.go:164] Run: docker start newest-cni-457779
	I1208 01:53:27.261076 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:27.282732 1136586 kic.go:430] container "newest-cni-457779" state is running.
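	
	fixHost found the machine Stopped and took the restart path: `docker start` on the existing container rather than recreating it, then re-inspecting until State.Status reports running. The same sequence by hand mirrors the cli_runner commands above:
	  docker container inspect newest-cni-457779 --format={{.State.Status}}   # "exited" before the restart
	  docker start newest-cni-457779
	  docker container inspect newest-cni-457779 --format={{.State.Status}}   # "running" afterwards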
	I1208 01:53:27.283122 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:27.311045 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:27.311287 1136586 machine.go:94] provisionDockerMachine start ...
	I1208 01:53:27.311346 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:27.335078 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:27.335680 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:27.335692 1136586 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:53:27.336739 1136586 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:53:30.502303 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
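	The first dial fails with a handshake EOF because sshd inside the freshly restarted container is not yet accepting connections; libmachine retries until the hostname probe succeeds about three seconds later. An equivalent manual session, using the port and key path that sshutil prints further down in this log:
	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa \
	    -p 33873 docker@127.0.0.1 hostname
	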
	I1208 01:53:30.502328 1136586 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:53:30.502403 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.520473 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.520821 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.520832 1136586 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:53:30.680340 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.680522 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.698887 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.699207 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.699230 1136586 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:53:30.850881 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 01:53:30.850907 1136586 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:53:30.850931 1136586 ubuntu.go:190] setting up certificates
	I1208 01:53:30.850939 1136586 provision.go:84] configureAuth start
	I1208 01:53:30.851000 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:30.868852 1136586 provision.go:143] copyHostCerts
	I1208 01:53:30.868925 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:53:30.868935 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:53:30.869018 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:53:30.869113 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:53:30.869119 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:53:30.869143 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:53:30.869192 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:53:30.869197 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:53:30.869218 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:53:30.869262 1136586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
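	
	configureAuth regenerates the machine's server certificate with SANs for every name the endpoint might be dialed by: 127.0.0.1, the container IP 192.168.76.2, localhost, minikube, and the profile name. What actually landed in the certificate can be read back with openssl against the path in the log:
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem \
	    | grep -A 1 'Subject Alternative Name'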
	I1208 01:53:31.146721 1136586 provision.go:177] copyRemoteCerts
	I1208 01:53:31.146819 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:53:31.146887 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.165202 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.270344 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:53:31.288520 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:53:31.307009 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:53:31.325139 1136586 provision.go:87] duration metric: took 474.176778ms to configureAuth
	I1208 01:53:31.325166 1136586 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:53:31.325413 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:31.325428 1136586 machine.go:97] duration metric: took 4.014132188s to provisionDockerMachine
	I1208 01:53:31.325438 1136586 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:53:31.325453 1136586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:53:31.325527 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:53:31.325572 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.342958 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.450484 1136586 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:53:31.453930 1136586 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:53:31.453961 1136586 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:53:31.453978 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:53:31.454035 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:53:31.454126 1136586 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:53:31.454236 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:53:31.461814 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:31.480492 1136586 start.go:296] duration metric: took 155.029827ms for postStartSetup
	I1208 01:53:31.480576 1136586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:53:31.480620 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.498567 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.608416 1136586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:53:31.613302 1136586 fix.go:56] duration metric: took 4.632518901s for fixHost
	I1208 01:53:31.613327 1136586 start.go:83] releasing machines lock for "newest-cni-457779", held for 4.632572375s
	I1208 01:53:31.613414 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:31.630699 1136586 ssh_runner.go:195] Run: cat /version.json
	I1208 01:53:31.630750 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.630785 1136586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:53:31.630847 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.650759 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.653824 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.754273 1136586 ssh_runner.go:195] Run: systemctl --version
	I1208 01:53:31.849639 1136586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:53:31.855754 1136586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:53:31.855850 1136586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:53:31.866557 1136586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:53:31.866588 1136586 start.go:496] detecting cgroup driver to use...
	I1208 01:53:31.866621 1136586 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 01:53:31.866707 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:53:31.887994 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:53:31.906727 1136586 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:53:31.906830 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:53:31.922954 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:53:31.936664 1136586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:53:32.054316 1136586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:53:32.173483 1136586 docker.go:234] disabling docker service ...
	I1208 01:53:32.173578 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:53:32.189444 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:53:32.206742 1136586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:53:32.325262 1136586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:53:32.443602 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:53:32.456770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:53:32.473213 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:53:32.483724 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:53:32.493138 1136586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:53:32.493251 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:53:32.502652 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.512217 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:53:32.521333 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.530989 1136586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:53:32.539889 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:53:32.549127 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:53:32.558425 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1208 01:53:32.567684 1136586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:53:32.575542 1136586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:53:32.583139 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:32.723777 1136586 ssh_runner.go:195] Run: sudo systemctl restart containerd
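The sed calls above rewrite /etc/containerd/config.toml in place before the restart; the decisive one forces SystemdCgroup = false so runc agrees with the cgroupfs driver detected earlier. A small Go stand-in for that single rewrite (the regexp approximates the logged `sed -i -r` expression; the section path in the sample TOML varies across containerd config versions):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample fragment of containerd's config.toml before the edit; the
	// exact section path differs between containerd config versions.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`

	// Rough equivalent of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}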
	I1208 01:53:32.846014 1136586 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:53:32.846088 1136586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:53:32.849865 1136586 start.go:564] Will wait 60s for crictl version
	I1208 01:53:32.849924 1136586 ssh_runner.go:195] Run: which crictl
	I1208 01:53:32.853562 1136586 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:53:32.880330 1136586 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:53:32.880452 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.901579 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.928462 1136586 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:53:32.931363 1136586 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:53:32.945897 1136586 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:53:32.950021 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:32.963090 1136586 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1208 01:53:32.593363 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:33.093099 1128548 node_ready.go:38] duration metric: took 6m0.00024354s for node "no-preload-536520" to be "Ready" ...
	I1208 01:53:33.096356 1128548 out.go:203] 
	W1208 01:53:33.099424 1128548 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:53:33.099449 1128548 out.go:285] * 
	W1208 01:53:33.101601 1128548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:53:33.103637 1128548 out.go:203] 
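The interleaved 1128548 lines above come from the parallel no-preload profile hitting its 6m0s readiness deadline: the apiserver at 192.168.85.2:8443 never answered, so every poll of the node's Ready condition ended in connection refused. What such a wait boils down to, as a hedged client-go sketch; the kubeconfig path and timings are placeholders, and this is not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-536520", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		// Transient errors (e.g. connection refused while the apiserver
		// is down) are simply retried until the deadline expires.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to be Ready")
}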
	I1208 01:53:32.966006 1136586 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:53:32.966181 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:32.966277 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.001671 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.001709 1136586 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:53:33.001783 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.037763 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.037789 1136586 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:53:33.037796 1136586 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:53:33.037895 1136586 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 01:53:33.037971 1136586 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:53:33.063762 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:33.063790 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:33.063814 1136586 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:53:33.063838 1136586 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:53:33.063976 1136586 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 01:53:33.064046 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:53:33.072124 1136586 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:53:33.072199 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:53:33.079978 1136586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:53:33.094440 1136586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:53:33.114285 1136586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
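The 2235-byte kubeadm.yaml.new staged here is the three-document config rendered above. One way to sanity-check a fragment such as the disabled eviction thresholds is to unmarshal it; a minimal sketch using gopkg.in/yaml.v3, with the struct trimmed to a hypothetical subset of KubeletConfiguration fields:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletCfg is a hypothetical trimmed view of KubeletConfiguration,
// covering only the fields inspected here.
type kubeletCfg struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

const doc = `
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var c kubeletCfg
	if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
		panic(err)
	}
	// All three thresholds are "0%", i.e. disk-pressure eviction is off.
	fmt.Printf("driver=%s failSwapOn=%v evictionHard=%v\n",
		c.CgroupDriver, c.FailSwapOn, c.EvictionHard)
}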
	I1208 01:53:33.148370 1136586 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:53:33.154333 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:33.175383 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:33.368419 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:33.425889 1136586 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:53:33.425915 1136586 certs.go:195] generating shared ca certs ...
	I1208 01:53:33.425933 1136586 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:33.426101 1136586 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:53:33.426153 1136586 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:53:33.426161 1136586 certs.go:257] generating profile certs ...
	I1208 01:53:33.426267 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:53:33.426332 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:53:33.426377 1136586 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:53:33.426524 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:53:33.426568 1136586 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:53:33.426582 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:53:33.426612 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:53:33.426642 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:53:33.426669 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:53:33.426734 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:33.427335 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:53:33.467362 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:53:33.494653 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:53:33.520274 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:53:33.539143 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:53:33.558359 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:53:33.583585 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:53:33.606437 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:53:33.629051 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:53:33.649569 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:53:33.670329 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:53:33.709388 1136586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:53:33.723127 1136586 ssh_runner.go:195] Run: openssl version
	I1208 01:53:33.729848 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.737400 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:53:33.744968 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749630 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749695 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.792574 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:53:33.800140 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.812741 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:53:33.821534 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825755 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825831 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.873472 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:53:33.882187 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.890767 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:53:33.901446 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907874 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907943 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.952061 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 01:53:33.960568 1136586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:53:33.965214 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:53:34.008563 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:53:34.055484 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:53:34.112335 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:53:34.165388 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:53:34.216189 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1208 01:53:34.263034 1136586 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:34.263135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:53:34.263235 1136586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:53:34.294120 1136586 cri.go:89] found id: ""
	I1208 01:53:34.294243 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:53:34.304846 1136586 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:53:34.304879 1136586 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:53:34.304960 1136586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:53:34.316473 1136586 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:53:34.317189 1136586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.317527 1136586 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-457779" cluster setting kubeconfig missing "newest-cni-457779" context setting]
	I1208 01:53:34.318043 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.319993 1136586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:53:34.332564 1136586 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:53:34.332599 1136586 kubeadm.go:602] duration metric: took 27.712722ms to restartPrimaryControlPlane
	I1208 01:53:34.332638 1136586 kubeadm.go:403] duration metric: took 69.60712ms to StartCluster
	I1208 01:53:34.332662 1136586 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.332751 1136586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.333761 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.334050 1136586 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:53:34.334509 1136586 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:53:34.334590 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:34.334604 1136586 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-457779"
	I1208 01:53:34.334619 1136586 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-457779"
	I1208 01:53:34.334646 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.334654 1136586 addons.go:70] Setting dashboard=true in profile "newest-cni-457779"
	I1208 01:53:34.334664 1136586 addons.go:239] Setting addon dashboard=true in "newest-cni-457779"
	W1208 01:53:34.334680 1136586 addons.go:248] addon dashboard should already be in state true
	I1208 01:53:34.334701 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.335128 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.335222 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.338384 1136586 out.go:179] * Verifying Kubernetes components...
	I1208 01:53:34.338808 1136586 addons.go:70] Setting default-storageclass=true in profile "newest-cni-457779"
	I1208 01:53:34.338830 1136586 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-457779"
	I1208 01:53:34.339192 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.342236 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:34.384696 1136586 addons.go:239] Setting addon default-storageclass=true in "newest-cni-457779"
	I1208 01:53:34.384738 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.385173 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.395531 1136586 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:53:34.398489 1136586 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:53:34.401766 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:53:34.401802 1136586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:53:34.401870 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.413624 1136586 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:53:34.416611 1136586 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.416635 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:53:34.416703 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.446412 1136586 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.446432 1136586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:53:34.446519 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.468661 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.486870 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.495400 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.648143 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:34.791310 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:53:34.791383 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:53:34.801259 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.809204 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.852787 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:53:34.852815 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:53:34.976510 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:53:34.976546 1136586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:53:35.059518 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:53:35.059546 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:53:35.081694 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:53:35.081725 1136586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:53:35.097221 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:53:35.097249 1136586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:53:35.113396 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:53:35.113423 1136586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:53:35.128309 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:53:35.128332 1136586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:53:35.144063 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.144088 1136586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:53:35.163973 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.343568 1136586 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:53:35.343639 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:35.343728 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343749 1136586 retry.go:31] will retry after 313.237886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343796 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343802 1136586 retry.go:31] will retry after 267.065812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343986 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343999 1136586 retry.go:31] will retry after 357.870271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.611924 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:35.657423 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:35.685479 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.685507 1136586 retry.go:31] will retry after 235.819569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.702853 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:35.745089 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.745200 1136586 retry.go:31] will retry after 496.615001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.783116 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.783150 1136586 retry.go:31] will retry after 415.603405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:35.844207 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:35.922577 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:35.992239 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.992284 1136586 retry.go:31] will retry after 419.233092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:36.199657 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.242360 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.275822 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.275881 1136586 retry.go:31] will retry after 506.304834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1208 01:53:36.313961 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.313996 1136586 retry.go:31] will retry after 341.203132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:36.344211 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:36.412076 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:36.475666 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.475724 1136586 retry.go:31] will retry after 757.567155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:36.656038 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.717469 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.717504 1136586 retry.go:31] will retry after 858.45693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:36.782939 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.844509 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:36.857199 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.857314 1136586 retry.go:31] will retry after 1.254351113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:37.233554 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:37.293681 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.293719 1136586 retry.go:31] will retry after 1.120312347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:37.343808 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:37.576883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:37.657137 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.657170 1136586 retry.go:31] will retry after 1.273828893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:37.844396 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.111904 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:38.175735 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.175771 1136586 retry.go:31] will retry after 1.371961744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:38.344170 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.414206 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:38.473557 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.473592 1136586 retry.go:31] will retry after 1.305474532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:38.843968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.931790 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:38.991073 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.991107 1136586 retry.go:31] will retry after 2.323329318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:39.344538 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:39.548354 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:39.614499 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.614532 1136586 retry.go:31] will retry after 2.345376349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:39.779883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:39.839516 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.839550 1136586 retry.go:31] will retry after 1.632764803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:39.843744 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.343857 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.844131 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.314885 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:41.344468 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:41.399054 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.399086 1136586 retry.go:31] will retry after 1.628703977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:41.473438 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:41.539567 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.539608 1136586 retry.go:31] will retry after 4.6526683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:41.844314 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.960631 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:42.037435 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:42.037475 1136586 retry.go:31] will retry after 2.24839836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:42.343723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:42.843913 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.028344 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:43.092228 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:43.092267 1136586 retry.go:31] will retry after 6.138872071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:43.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.843812 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:44.286696 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:44.343910 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:44.363154 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:44.363184 1136586 retry.go:31] will retry after 4.885412288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:44.843802 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.344023 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.844504 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.193193 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:46.256318 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:46.256352 1136586 retry.go:31] will retry after 6.576205276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:46.344576 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.844679 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.344358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.843925 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
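Interleaved with the apply retries, minikube polls for the control-plane process itself, running sudo pgrep -xnf kube-apiserver.*minikube.* over SSH roughly every 500ms until it matches. A local sketch of that readiness loop, assuming pgrep on the current host rather than the real ssh_runner path (waitForProcess is an illustrative name):

	// Sketch: poll pgrep until the process pattern matches or we time out.
	// pgrep exits 0 when at least one process matches, non-zero otherwise.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(pattern string, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return true
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		return false
	}

	func main() {
		if waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute) {
			fmt.Println("kube-apiserver is running")
		} else {
			fmt.Println("timed out waiting for kube-apiserver")
		}
	}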
	I1208 01:53:49.231766 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:49.249321 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:49.295577 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.295606 1136586 retry.go:31] will retry after 5.897796539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1208 01:53:49.321879 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.321913 1136586 retry.go:31] will retry after 5.135606393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:49.343793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.843777 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.844708 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.344109 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.844601 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.344090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.833191 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:52.843854 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:52.942603 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:52.942641 1136586 retry.go:31] will retry after 10.350172314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:53.344347 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:53.843800 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.343948 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.457681 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:54.519827 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:54.519864 1136586 retry.go:31] will retry after 12.267694675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:54.844117 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.193625 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:55.256579 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:55.256612 1136586 retry.go:31] will retry after 11.163170119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:55.343847 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.843783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.343814 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.844654 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.344616 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.844487 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.343880 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.843787 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.343848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.843826 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.343799 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.844518 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.343861 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.844575 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.343756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.844391 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:03.293666 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:54:03.344443 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:03.397612 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:03.397650 1136586 retry.go:31] will retry after 19.276295687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:54:03.844417 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.343968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.344710 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.843828 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.420172 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:06.484485 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.484519 1136586 retry.go:31] will retry after 9.376809348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:54:06.788188 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:54:06.843694 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:06.852042 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.852079 1136586 retry.go:31] will retry after 14.243902866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:54:07.344022 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:07.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.344592 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.844723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.344453 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.843950 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.344400 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.844496 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.343717 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.844737 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.344750 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.843793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.343904 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.343908 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.844260 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.344591 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.843791 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.862033 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:15.923558 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:15.923598 1136586 retry.go:31] will retry after 11.623443237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
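	The retry.go waits grow across this run (11.6 s here, then 14.3 s, 25.3 s, 28.8 s and 43.9 s below), consistent with jittered exponential backoff. A hedged sketch of that pattern, with hypothetical names, not the real retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// backoffDelays returns delays that roughly double each attempt,
	// each scaled by a random jitter factor in [1.0, 1.5).
	func backoffDelays(base time.Duration, attempts int) []time.Duration {
		delays := make([]time.Duration, 0, attempts)
		d := base
		for i := 0; i < attempts; i++ {
			jitter := 1.0 + rand.Float64()*0.5
			delays = append(delays, time.Duration(float64(d)*jitter))
			d *= 2
		}
		return delays
	}

	func main() {
		for i, d := range backoffDelays(10*time.Second, 5) {
			fmt.Printf("attempt %d: wait %s\n", i+1, d.Round(time.Millisecond))
		}
	}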
	I1208 01:54:16.344246 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:16.844386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.344635 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.344732 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.843932 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.344121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.844530 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.344183 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.844204 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.097241 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:21.169765 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.169803 1136586 retry.go:31] will retry after 14.268049825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.343856 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.844672 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.344587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.674615 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:22.733064 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.733093 1136586 retry.go:31] will retry after 25.324201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
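	Every apply in this run fails the same way: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so with nothing listening on localhost:8443 each manifest is rejected before anything is submitted (the error text itself names --validate=false as the escape hatch). A minimal sketch, standard library only, of probing the endpoint before attempting an apply; the helper name is hypothetical:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiServerReachable reports whether anything accepts TCP
	// connections at addr (e.g. "localhost:8443").
	func apiServerReachable(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println("localhost:8443 reachable:", apiServerReachable("localhost:8443"))
	}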
	I1208 01:54:22.844513 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.344392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.844423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.343928 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.844484 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.344404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.844721 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.344197 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.844678 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.343798 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.547765 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:27.612562 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.612601 1136586 retry.go:31] will retry after 28.822296594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.344385 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.344796 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.344407 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.844544 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.343765 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.844221 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.343845 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.844333 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.344526 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.844321 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:34.344033 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:34.344149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:34.370172 1136586 cri.go:89] found id: ""
	I1208 01:54:34.370196 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.370205 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:34.370211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:34.370269 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:34.395619 1136586 cri.go:89] found id: ""
	I1208 01:54:34.395642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.395650 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:34.395656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:34.395720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:34.422963 1136586 cri.go:89] found id: ""
	I1208 01:54:34.422993 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.423003 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:34.423009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:34.423074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:34.451846 1136586 cri.go:89] found id: ""
	I1208 01:54:34.451871 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.451879 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:34.451886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:34.451951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:34.480597 1136586 cri.go:89] found id: ""
	I1208 01:54:34.480622 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.480631 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:34.480638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:34.480728 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:34.505381 1136586 cri.go:89] found id: ""
	I1208 01:54:34.505412 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.505421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:34.505427 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:34.505486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:34.531276 1136586 cri.go:89] found id: ""
	I1208 01:54:34.531304 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.531313 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:34.531320 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:34.531384 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:34.556518 1136586 cri.go:89] found id: ""
	I1208 01:54:34.556542 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.556550 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
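	The cri.go block above enumerates each control-plane component and finds no containers at all, which is why the log gathering that follows falls back to journald. An illustrative Go loop over the same crictl invocation (the command line is verbatim from the log; the wrapper around it is assumed):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation as the log: list all matching containers, IDs only.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}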
	I1208 01:54:34.556566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:34.556578 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:34.613370 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:34.613408 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:34.628308 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:34.628338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:34.694181 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:34.694202 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:34.694216 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:34.720374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:34.720425 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
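	Even with zero containers found, the run still collects kubelet, dmesg, containerd and container-status output, tolerating failures such as the describe-nodes error above. A hedged sketch of that diagnostics pass, with the command strings copied from the log and the surrounding structure assumed rather than taken from logs.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func gatherDiagnostics() map[string]string {
		// Command strings copied from the log; each is best-effort.
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"containerd":       "sudo journalctl -u containerd -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		out := make(map[string]string)
		for name, cmd := range cmds {
			b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				// A dead apiserver must not abort collection; record and continue.
				out[name] = fmt.Sprintf("error: %v\n%s", err, b)
				continue
			}
			out[name] = string(b)
		}
		return out
	}

	func main() {
		for name, logs := range gatherDiagnostics() {
			fmt.Printf("== %s ==\n%.200s\n", name, logs)
		}
	}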
	I1208 01:54:35.438126 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:35.498508 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:35.498543 1136586 retry.go:31] will retry after 43.888808015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:37.252653 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:37.264309 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:37.264385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:37.296827 1136586 cri.go:89] found id: ""
	I1208 01:54:37.296856 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.296865 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:37.296872 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:37.296938 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:37.322795 1136586 cri.go:89] found id: ""
	I1208 01:54:37.322818 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.322826 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:37.322832 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:37.322890 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:37.347015 1136586 cri.go:89] found id: ""
	I1208 01:54:37.347039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.347048 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:37.347054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:37.347112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:37.376654 1136586 cri.go:89] found id: ""
	I1208 01:54:37.376685 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.376694 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:37.376702 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:37.376768 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:37.402392 1136586 cri.go:89] found id: ""
	I1208 01:54:37.402419 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.402428 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:37.402434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:37.402531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:37.427265 1136586 cri.go:89] found id: ""
	I1208 01:54:37.427292 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.427302 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:37.427308 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:37.427375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:37.452009 1136586 cri.go:89] found id: ""
	I1208 01:54:37.452036 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.452046 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:37.452052 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:37.452113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:37.478250 1136586 cri.go:89] found id: ""
	I1208 01:54:37.478274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.478282 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:37.478292 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:37.478303 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:37.492990 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:37.493059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:37.560010 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:37.560033 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:37.560046 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:37.586791 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:37.586827 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:37.617527 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:37.617603 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.174865 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:40.187458 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:40.187538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:40.216164 1136586 cri.go:89] found id: ""
	I1208 01:54:40.216195 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.216204 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:40.216211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:40.216280 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:40.243524 1136586 cri.go:89] found id: ""
	I1208 01:54:40.243552 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.243561 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:40.243567 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:40.243632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:40.273554 1136586 cri.go:89] found id: ""
	I1208 01:54:40.273582 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.273592 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:40.273598 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:40.273660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:40.301228 1136586 cri.go:89] found id: ""
	I1208 01:54:40.301249 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.301257 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:40.301263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:40.301321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:40.330159 1136586 cri.go:89] found id: ""
	I1208 01:54:40.330179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.330187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:40.330193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:40.330252 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:40.355514 1136586 cri.go:89] found id: ""
	I1208 01:54:40.355583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.355604 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:40.355611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:40.355685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:40.381442 1136586 cri.go:89] found id: ""
	I1208 01:54:40.381468 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.381477 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:40.381483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:40.381539 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:40.406014 1136586 cri.go:89] found id: ""
	I1208 01:54:40.406039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.406048 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:40.406057 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:40.406069 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:40.465966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:40.465986 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:40.466000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:40.490766 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:40.490799 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:40.518111 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:40.518140 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.573667 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:40.573702 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.088883 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:43.112185 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:43.112253 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:43.175929 1136586 cri.go:89] found id: ""
	I1208 01:54:43.175952 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.175960 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:43.175966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:43.176037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:43.208920 1136586 cri.go:89] found id: ""
	I1208 01:54:43.208946 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.208955 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:43.208961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:43.209024 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:43.235210 1136586 cri.go:89] found id: ""
	I1208 01:54:43.235235 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.235245 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:43.235252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:43.235319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:43.263618 1136586 cri.go:89] found id: ""
	I1208 01:54:43.263642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.263658 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:43.263666 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:43.263727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:43.290748 1136586 cri.go:89] found id: ""
	I1208 01:54:43.290783 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.290792 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:43.290798 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:43.290857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:43.314874 1136586 cri.go:89] found id: ""
	I1208 01:54:43.314898 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.314906 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:43.314913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:43.314975 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:43.339655 1136586 cri.go:89] found id: ""
	I1208 01:54:43.339680 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.339707 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:43.339713 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:43.339777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:43.364203 1136586 cri.go:89] found id: ""
	I1208 01:54:43.364230 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.364240 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:43.364250 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:43.364261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:43.390041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:43.390079 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:43.420626 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:43.420661 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:43.475834 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:43.475876 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.491658 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:43.491696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:43.559609 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:46.059911 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:46.070737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:46.070825 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:46.110556 1136586 cri.go:89] found id: ""
	I1208 01:54:46.110583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.110593 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:46.110600 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:46.110665 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:46.186917 1136586 cri.go:89] found id: ""
	I1208 01:54:46.186942 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.186951 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:46.186957 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:46.187021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:46.212604 1136586 cri.go:89] found id: ""
	I1208 01:54:46.212631 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.212639 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:46.212646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:46.212724 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:46.239989 1136586 cri.go:89] found id: ""
	I1208 01:54:46.240043 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.240054 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:46.240060 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:46.240217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:46.266799 1136586 cri.go:89] found id: ""
	I1208 01:54:46.266829 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.266839 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:46.266845 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:46.266918 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:46.294724 1136586 cri.go:89] found id: ""
	I1208 01:54:46.294753 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.294762 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:46.294769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:46.294829 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:46.320725 1136586 cri.go:89] found id: ""
	I1208 01:54:46.320754 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.320764 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:46.320771 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:46.320854 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:46.350768 1136586 cri.go:89] found id: ""
	I1208 01:54:46.350792 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.350801 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:46.350810 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:46.350822 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:46.416454 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:46.407778    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.408509    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410162    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410818    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.412543    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
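Note: each describe-nodes attempt fails identically because nothing is listening on localhost:8443. Assuming the kubectl and kubeconfig paths shown in the log, a minimal reachability check would be:

    # Illustrative only: probe the apiserver health endpoint directly.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz \
      || echo "apiserver not reachable on localhost:8443"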
	I1208 01:54:46.416490 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:46.416510 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:46.442082 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:46.442115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
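Note: in the command above, the backtick substitution expands to crictl's path when it is installed and to the bare name otherwise, and the trailing || falls back to docker if crictl fails. An equivalent, illustrative form using $() instead of backticks:

    # Same fallback chain as the log line, rewritten with $() for clarity.
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a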
	I1208 01:54:46.474546 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:46.474573 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:46.532104 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:46.532141 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
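Note: the gathering pass pulls the newest 400 lines from each systemd unit plus kernel messages at warning level and above. The equivalent hand-run commands, with flags copied verbatim from the log lines above:

    # Flags copied from the log; illustrative only.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400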
	I1208 01:54:48.057590 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:48.120301 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:48.120337 1136586 retry.go:31] will retry after 17.544839516s (same apply error as above)
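Note: the apply fails during client-side validation because kubectl cannot download the OpenAPI schema from the dead apiserver; retry.go then backs off for roughly 17.5s before trying again. As the error text itself suggests, validation can be skipped, though with the apiserver down the request would still fail server-side; a hedged sketch:

    # Illustrative only: skip the openapi fetch as the error message suggests.
    # This does not help while the apiserver is down; the apply still fails.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml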
	I1208 01:54:49.047527 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
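Note: before each listing cycle, minikube probes for a live apiserver process. The pgrep flags are exactly those in the log line above: -x requires the pattern to match the whole command line, -n selects the newest match, and -f matches against the full command line rather than just the process name. An illustrative wrapper:

    # pgrep flags as in the log line above; the wrapper is illustrative.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process is running"
    else
      echo "no kube-apiserver process found"
    fi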
	I1208 01:54:49.058154 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.058224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.087906 1136586 cri.go:89] found id: ""
	I1208 01:54:49.087974 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.087999 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.088010 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.088086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.147486 1136586 cri.go:89] found id: ""
	I1208 01:54:49.147562 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.147585 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.147603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.147699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.190637 1136586 cri.go:89] found id: ""
	I1208 01:54:49.190712 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.190735 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.190755 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.190842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.222497 1136586 cri.go:89] found id: ""
	I1208 01:54:49.222525 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.222534 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.222549 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.222624 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.247026 1136586 cri.go:89] found id: ""
	I1208 01:54:49.247052 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.247061 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.247067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.247125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:49.275349 1136586 cri.go:89] found id: ""
	I1208 01:54:49.275378 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.275387 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:49.275394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:49.275499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:49.300792 1136586 cri.go:89] found id: ""
	I1208 01:54:49.300820 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.300829 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:49.300835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:49.300892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:49.325853 1136586 cri.go:89] found id: ""
	I1208 01:54:49.325882 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.325890 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:49.325900 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:49.325912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:49.384418 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:49.384468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:49.399275 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:49.399307 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:49.466718 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:49.458157    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.458602    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460310    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460773    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.462192    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:49.466785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:49.466814 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:49.491769 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:49.491803 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:52.023420 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:52.034753 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:52.034828 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:52.064923 1136586 cri.go:89] found id: ""
	I1208 01:54:52.064945 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.064953 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:52.064960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:52.065022 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:52.104945 1136586 cri.go:89] found id: ""
	I1208 01:54:52.104968 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.104977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:52.104983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:52.105043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:52.171374 1136586 cri.go:89] found id: ""
	I1208 01:54:52.171395 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.171404 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:52.171410 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:52.171468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:52.201431 1136586 cri.go:89] found id: ""
	I1208 01:54:52.201476 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.201485 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:52.201492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:52.201563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:52.226892 1136586 cri.go:89] found id: ""
	I1208 01:54:52.226920 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.226929 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:52.226935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:52.227001 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:52.252811 1136586 cri.go:89] found id: ""
	I1208 01:54:52.252891 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.252914 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:52.252935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:52.253034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:52.282156 1136586 cri.go:89] found id: ""
	I1208 01:54:52.282179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.282188 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:52.282195 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:52.282259 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:52.308580 1136586 cri.go:89] found id: ""
	I1208 01:54:52.308607 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.308618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:52.308628 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:52.308639 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:52.364992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:52.365028 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:52.379850 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:52.379877 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:52.445238 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:52.436912    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.437761    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439367    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439683    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.441222    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:52.445260 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:52.445273 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:52.471470 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:52.471505 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:55.003548 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:55.026046 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:55.026131 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:55.053887 1136586 cri.go:89] found id: ""
	I1208 01:54:55.053964 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.053989 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:55.054009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:55.054101 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:55.088698 1136586 cri.go:89] found id: ""
	I1208 01:54:55.088724 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.088733 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:55.088760 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:55.088849 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:55.170740 1136586 cri.go:89] found id: ""
	I1208 01:54:55.170776 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.170785 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:55.170791 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:55.170899 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:55.197620 1136586 cri.go:89] found id: ""
	I1208 01:54:55.197656 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.197666 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:55.197690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:55.197776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:55.223553 1136586 cri.go:89] found id: ""
	I1208 01:54:55.223580 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.223589 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:55.223595 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:55.223680 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:55.248608 1136586 cri.go:89] found id: ""
	I1208 01:54:55.248677 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.248692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:55.248699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:55.248765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:55.274165 1136586 cri.go:89] found id: ""
	I1208 01:54:55.274232 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.274254 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:55.274272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:55.274361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:55.300558 1136586 cri.go:89] found id: ""
	I1208 01:54:55.300590 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.300600 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:55.300611 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:55.300622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:55.360386 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:55.360422 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:55.375869 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:55.375899 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:55.447970 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:55.439084    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.439796    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.441452    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.442051    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.443786    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:55.447993 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:55.448005 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:55.473774 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:55.473808 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:56.435194 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:56.498425 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:54:56.498545 1136586 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [same apply error as above]
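Note: out.go surfaces the failure to the user while the addon manager keeps retrying in the background. Once the apiserver becomes reachable, the addon can be re-applied by hand; the profile name below is a placeholder, and the kubectl invocation repeats the one from the log:

    # Illustrative recovery once the apiserver is up; <profile> is hypothetical.
    minikube -p <profile> addons enable storage-provisioner
    # or re-apply the manifest directly, as the addon manager does:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storage-provisioner.yaml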
	I1208 01:54:58.006121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:58.018380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:58.018521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:58.045144 1136586 cri.go:89] found id: ""
	I1208 01:54:58.045180 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.045189 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:58.045211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:58.045296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:58.071125 1136586 cri.go:89] found id: ""
	I1208 01:54:58.071151 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.071160 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:58.071167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:58.071226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:58.121465 1136586 cri.go:89] found id: ""
	I1208 01:54:58.121492 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.121511 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:58.121519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:58.121589 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:58.182249 1136586 cri.go:89] found id: ""
	I1208 01:54:58.182274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.182282 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:58.182288 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:58.182350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:58.211355 1136586 cri.go:89] found id: ""
	I1208 01:54:58.211380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.211389 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:58.211395 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:58.211458 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:58.239234 1136586 cri.go:89] found id: ""
	I1208 01:54:58.239262 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.239271 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:58.239278 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:58.239338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:58.268137 1136586 cri.go:89] found id: ""
	I1208 01:54:58.268212 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.268227 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:58.268235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:58.268311 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:58.298356 1136586 cri.go:89] found id: ""
	I1208 01:54:58.298380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.298389 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:58.298399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:58.298483 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:58.356947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:58.356983 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:58.371448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:58.371475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:58.435566 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:58.427538    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.428174    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.429739    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.430336    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.431872    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:54:58.435589 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:58.435602 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:58.460122 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:58.460156 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:00.988330 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:00.999374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:00.999446 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:01.036571 1136586 cri.go:89] found id: ""
	I1208 01:55:01.036650 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.036687 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:01.036714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:01.036792 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:01.062231 1136586 cri.go:89] found id: ""
	I1208 01:55:01.062257 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.062267 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:01.062274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:01.062333 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:01.087570 1136586 cri.go:89] found id: ""
	I1208 01:55:01.087592 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.087601 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:01.087608 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:01.087668 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:01.137796 1136586 cri.go:89] found id: ""
	I1208 01:55:01.137822 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.137831 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:01.137838 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:01.137905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:01.193217 1136586 cri.go:89] found id: ""
	I1208 01:55:01.193240 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.193249 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:01.193256 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:01.193322 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:01.225114 1136586 cri.go:89] found id: ""
	I1208 01:55:01.225191 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.225217 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:01.225236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:01.225335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:01.253406 1136586 cri.go:89] found id: ""
	I1208 01:55:01.253485 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.253510 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:01.253529 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:01.253641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:01.279950 1136586 cri.go:89] found id: ""
	I1208 01:55:01.280032 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.280058 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:01.280077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:01.280102 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:01.314699 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:01.314731 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:01.371902 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:01.371941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:01.387482 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:01.387511 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:01.454737 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:01.454761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:01.454775 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:03.982003 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:03.993616 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:03.993689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:04.022115 1136586 cri.go:89] found id: ""
	I1208 01:55:04.022143 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.022152 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:04.022162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:04.022228 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:04.052694 1136586 cri.go:89] found id: ""
	I1208 01:55:04.052720 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.052730 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:04.052737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:04.052799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:04.077702 1136586 cri.go:89] found id: ""
	I1208 01:55:04.077728 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.077737 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:04.077750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:04.077812 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:04.141633 1136586 cri.go:89] found id: ""
	I1208 01:55:04.141668 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.141677 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:04.141683 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:04.141753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:04.188894 1136586 cri.go:89] found id: ""
	I1208 01:55:04.188962 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.188976 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:04.188983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:04.189051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:04.218926 1136586 cri.go:89] found id: ""
	I1208 01:55:04.218951 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.218960 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:04.218966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:04.219028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:04.244759 1136586 cri.go:89] found id: ""
	I1208 01:55:04.244786 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.244795 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:04.244802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:04.244885 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:04.270311 1136586 cri.go:89] found id: ""
	I1208 01:55:04.270337 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.270346 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:04.270377 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:04.270396 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:04.298563 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:04.298594 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:04.357076 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:04.357110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:04.372213 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:04.372255 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:04.437142 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:04.437163 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:04.437176 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:05.665650 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:55:05.727737 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:05.727864 1136586 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [same apply error as above]
	I1208 01:55:06.963817 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:06.974536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:06.974639 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:06.999437 1136586 cri.go:89] found id: ""
	I1208 01:55:06.999466 1136586 logs.go:282] 0 containers: []
	W1208 01:55:06.999475 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:06.999481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:06.999540 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:07.029225 1136586 cri.go:89] found id: ""
	I1208 01:55:07.029253 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.029262 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:07.029274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:07.029343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:07.058657 1136586 cri.go:89] found id: ""
	I1208 01:55:07.058683 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.058692 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:07.058698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:07.058757 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:07.090130 1136586 cri.go:89] found id: ""
	I1208 01:55:07.090158 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.090168 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:07.090175 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:07.090236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:07.139122 1136586 cri.go:89] found id: ""
	I1208 01:55:07.139177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.139187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:07.139194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:07.139261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:07.172306 1136586 cri.go:89] found id: ""
	I1208 01:55:07.172328 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.172336 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:07.172343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:07.172400 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:07.204660 1136586 cri.go:89] found id: ""
	I1208 01:55:07.204689 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.204698 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:07.204705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:07.204764 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:07.230319 1136586 cri.go:89] found id: ""
	I1208 01:55:07.230349 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.230358 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
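
The scan above is minikube enumerating every control-plane container it expects to find (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) through crictl; each query returns an empty ID list, which is why the run falls through to full log gathering below. A minimal sketch of the same probe, assuming a shell inside the node (for example via minikube ssh); the name list and crictl flags are taken from the log lines above:

    # Re-run the control-plane probe by hand; names and flags mirror the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""   # corresponds to the W-level lines above
      else
        echo "$name -> $ids"
      fi
    done
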
	I1208 01:55:07.230368 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:07.230380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:07.285979 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:07.286015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:07.301365 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:07.301391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:07.369069 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:07.369140 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:07.369161 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:07.394018 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:07.394051 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
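
The describe-nodes failure embedded above has the same root cause as the empty container scan: kubectl on the node dials the apiserver at https://localhost:8443, and with no kube-apiserver process or container running, every dial is refused. A quick sketch, assuming it is run inside the node, that separates "apiserver not running" from "apiserver running but unreachable"; the pgrep pattern, port, and kubeconfig path are all taken from the log:

    # Is the process there at all? (same probe the log runs)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" \
      || echo "no apiserver process"

    # Raw TCP check on the endpoint kubectl is dialing.
    (exec 3<>/dev/tcp/localhost/8443) 2>/dev/null && echo "port 8443 open" \
      || echo "connection refused on 8443"       # the error in the stderr above

    # Confirm which endpoint the kubeconfig points kubectl at.
    sudo grep 'server:' /var/lib/minikube/kubeconfig
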
	I1208 01:55:09.924985 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:09.935805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:09.935908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:09.962622 1136586 cri.go:89] found id: ""
	I1208 01:55:09.962647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.962656 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:09.962662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:09.962729 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:09.988243 1136586 cri.go:89] found id: ""
	I1208 01:55:09.988266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.988275 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:09.988283 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:09.988347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:10.019449 1136586 cri.go:89] found id: ""
	I1208 01:55:10.019482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.019492 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:10.019499 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:10.019570 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:10.048613 1136586 cri.go:89] found id: ""
	I1208 01:55:10.048637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.048646 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:10.048652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:10.048726 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:10.080915 1136586 cri.go:89] found id: ""
	I1208 01:55:10.080940 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.080949 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:10.080956 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:10.081021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:10.144352 1136586 cri.go:89] found id: ""
	I1208 01:55:10.144375 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.144384 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:10.144396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:10.144479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:10.182563 1136586 cri.go:89] found id: ""
	I1208 01:55:10.182586 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.182595 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:10.182601 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:10.182662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:10.213649 1136586 cri.go:89] found id: ""
	I1208 01:55:10.213682 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.213694 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:10.213706 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:10.213724 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:10.242084 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:10.242114 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:10.298146 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:10.298181 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:10.313543 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:10.313574 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:10.380205 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:10.380228 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:10.380248 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:12.905658 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:12.916576 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:12.916648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:12.944122 1136586 cri.go:89] found id: ""
	I1208 01:55:12.944146 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.944155 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:12.944161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:12.944222 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:12.969438 1136586 cri.go:89] found id: ""
	I1208 01:55:12.969464 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.969473 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:12.969481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:12.969542 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:12.997359 1136586 cri.go:89] found id: ""
	I1208 01:55:12.997388 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.997397 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:12.997403 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:12.997470 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:13.025718 1136586 cri.go:89] found id: ""
	I1208 01:55:13.025746 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.025756 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:13.025763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:13.025823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:13.056865 1136586 cri.go:89] found id: ""
	I1208 01:55:13.056892 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.056902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:13.056908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:13.056969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:13.082432 1136586 cri.go:89] found id: ""
	I1208 01:55:13.082528 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.082546 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:13.082554 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:13.082626 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:13.125069 1136586 cri.go:89] found id: ""
	I1208 01:55:13.125144 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.125168 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:13.125187 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:13.125272 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:13.178384 1136586 cri.go:89] found id: ""
	I1208 01:55:13.178482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.178507 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:13.178529 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:13.178567 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:13.239609 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:13.239644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:13.256212 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:13.256240 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:13.323842 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:13.323920 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:13.323949 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:13.348533 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:13.348570 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:15.879223 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:15.890243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:15.890364 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:15.914857 1136586 cri.go:89] found id: ""
	I1208 01:55:15.914886 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.914894 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:15.914901 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:15.914960 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:15.939097 1136586 cri.go:89] found id: ""
	I1208 01:55:15.939123 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.939134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:15.939140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:15.939201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:15.964064 1136586 cri.go:89] found id: ""
	I1208 01:55:15.964088 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.964097 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:15.964103 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:15.964167 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:15.989749 1136586 cri.go:89] found id: ""
	I1208 01:55:15.989789 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.989798 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:15.989805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:15.989864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:16.017523 1136586 cri.go:89] found id: ""
	I1208 01:55:16.017558 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.017567 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:16.017573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:16.017638 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:16.043968 1136586 cri.go:89] found id: ""
	I1208 01:55:16.043996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.044005 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:16.044012 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:16.044077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:16.068942 1136586 cri.go:89] found id: ""
	I1208 01:55:16.069012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.069038 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:16.069057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:16.069149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:16.110088 1136586 cri.go:89] found id: ""
	I1208 01:55:16.110117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.110127 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:16.110136 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:16.110147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:16.194161 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:16.194206 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:16.209083 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:16.209108 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:16.278327 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:16.278346 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:16.278361 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:16.304026 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:16.304059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:18.833542 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:18.844944 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:18.845029 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:18.871187 1136586 cri.go:89] found id: ""
	I1208 01:55:18.871210 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.871220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:18.871226 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:18.871287 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:18.899377 1136586 cri.go:89] found id: ""
	I1208 01:55:18.899399 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.899407 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:18.899413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:18.899473 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:18.924554 1136586 cri.go:89] found id: ""
	I1208 01:55:18.924578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.924587 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:18.924593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:18.924653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:18.949910 1136586 cri.go:89] found id: ""
	I1208 01:55:18.949932 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.949941 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:18.949947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:18.950008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:18.974978 1136586 cri.go:89] found id: ""
	I1208 01:55:18.975001 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.975009 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:18.975015 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:18.975074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:19.005380 1136586 cri.go:89] found id: ""
	I1208 01:55:19.005411 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.005421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:19.005429 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:19.005503 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:19.032668 1136586 cri.go:89] found id: ""
	I1208 01:55:19.032750 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.032765 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:19.032780 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:19.032843 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:19.059531 1136586 cri.go:89] found id: ""
	I1208 01:55:19.059562 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.059572 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:19.059602 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:19.059619 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:19.121579 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:19.121613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:19.138076 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:19.138103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:19.222963 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:19.222987 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:19.223000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:19.253325 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:19.253368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:19.388285 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:55:19.459805 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:19.459968 1136586 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:55:19.463177 1136586 out.go:179] * Enabled addons: 
	I1208 01:55:19.465938 1136586 addons.go:530] duration metric: took 1m45.131432136s for enable addons: enabled=[]
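
The dashboard addon failure above is the same outage surfacing one layer up, not a manifest problem: kubectl apply validates each file against the server's OpenAPI schema, and fetching that schema needs a live apiserver on localhost:8443. The --validate=false hint in the stderr would only skip the schema download; the apply itself still has to reach port 8443, so it would fail the same way until the control plane is up. A sketch of both invocations, with paths copied from the log (the remaining -f flags are the same dashboard files listed above):

    KUBECTL=/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl

    # The apply as the log runs it (abbreviated to two of the ten manifests).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply --force \
      -f /etc/kubernetes/addons/dashboard-ns.yaml \
      -f /etc/kubernetes/addons/dashboard-svc.yaml

    # The stderr hint; removes validation but still dials :8443, so it also fails here.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$KUBECTL" apply --force \
      --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml
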
	I1208 01:55:21.781716 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:21.792431 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:21.792512 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:21.819119 1136586 cri.go:89] found id: ""
	I1208 01:55:21.819147 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.819157 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:21.819164 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:21.819230 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:21.848715 1136586 cri.go:89] found id: ""
	I1208 01:55:21.848751 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.848760 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:21.848767 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:21.848826 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:21.873926 1136586 cri.go:89] found id: ""
	I1208 01:55:21.873952 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.873961 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:21.873968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:21.874028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:21.900968 1136586 cri.go:89] found id: ""
	I1208 01:55:21.900995 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.901005 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:21.901011 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:21.901071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:21.929497 1136586 cri.go:89] found id: ""
	I1208 01:55:21.929524 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.929533 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:21.929540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:21.929600 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:21.954914 1136586 cri.go:89] found id: ""
	I1208 01:55:21.954936 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.954951 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:21.954959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:21.955020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:21.985551 1136586 cri.go:89] found id: ""
	I1208 01:55:21.985578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.985586 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:21.985593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:21.985656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:22.016148 1136586 cri.go:89] found id: ""
	I1208 01:55:22.016222 1136586 logs.go:282] 0 containers: []
	W1208 01:55:22.016244 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:22.016266 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:22.016305 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:22.049513 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:22.049585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:22.109605 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:22.109713 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:22.126061 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:22.126134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:22.225148 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:22.225170 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:22.225183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:24.750628 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:24.761806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:24.761883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:24.787831 1136586 cri.go:89] found id: ""
	I1208 01:55:24.787855 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.787864 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:24.787871 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:24.787931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:24.816489 1136586 cri.go:89] found id: ""
	I1208 01:55:24.816516 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.816526 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:24.816533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:24.816631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:24.843224 1136586 cri.go:89] found id: ""
	I1208 01:55:24.843247 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.843256 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:24.843262 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:24.843324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:24.869163 1136586 cri.go:89] found id: ""
	I1208 01:55:24.869186 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.869195 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:24.869202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:24.869261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:24.896657 1136586 cri.go:89] found id: ""
	I1208 01:55:24.896685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.896695 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:24.896701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:24.896763 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:24.924888 1136586 cri.go:89] found id: ""
	I1208 01:55:24.924918 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.924927 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:24.924934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:24.924999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:24.951093 1136586 cri.go:89] found id: ""
	I1208 01:55:24.951117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.951126 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:24.951133 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:24.951196 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:24.980609 1136586 cri.go:89] found id: ""
	I1208 01:55:24.980633 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.980642 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:24.980651 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:24.980662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:25.036369 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:25.036404 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:25.057565 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:25.057647 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:25.200105 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:25.189333    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.190129    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192138    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192915    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.194912    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:25.200136 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:25.200151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:25.227358 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:25.227398 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
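	Each polling cycle above issues the same per-component crictl query and finds nothing. A minimal shell sketch to reproduce these checks by hand (assuming shell access to the node, e.g. via `minikube ssh` against the affected profile; the component list and crictl flags are copied verbatim from the log):

	    # Query each expected control-plane container the way the log collector does.
	    # An empty result means no container (running or exited) matches the name.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done

	Because `crictl ps -a` includes exited containers, an empty result for every name suggests the control-plane containers were never created under this runtime, not that they started and then crashed.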
	I1208 01:55:27.756955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:27.767899 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:27.767972 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:27.795426 1136586 cri.go:89] found id: ""
	I1208 01:55:27.795451 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.795460 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:27.795466 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:27.795529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:27.821100 1136586 cri.go:89] found id: ""
	I1208 01:55:27.821127 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.821137 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:27.821143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:27.821213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:27.851486 1136586 cri.go:89] found id: ""
	I1208 01:55:27.851509 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.851518 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:27.851524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:27.851583 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:27.881644 1136586 cri.go:89] found id: ""
	I1208 01:55:27.881665 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.881673 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:27.881681 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:27.881739 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:27.911149 1136586 cri.go:89] found id: ""
	I1208 01:55:27.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.911185 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:27.911191 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:27.911296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:27.935972 1136586 cri.go:89] found id: ""
	I1208 01:55:27.936042 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.936069 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:27.936084 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:27.936158 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:27.961735 1136586 cri.go:89] found id: ""
	I1208 01:55:27.961762 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.961772 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:27.961778 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:27.961845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:27.987428 1136586 cri.go:89] found id: ""
	I1208 01:55:27.987452 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.987461 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:27.987471 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:27.987482 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:28.018603 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:28.018646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:28.051322 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:28.051395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:28.116115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:28.116154 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:28.140270 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:28.140297 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:28.224200 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:28.213883    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.214376    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218218    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218825    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.220332    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:30.725898 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:30.736353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:30.736438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:30.764621 1136586 cri.go:89] found id: ""
	I1208 01:55:30.764647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.764667 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:30.764691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:30.764772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:30.790477 1136586 cri.go:89] found id: ""
	I1208 01:55:30.790502 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.790510 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:30.790516 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:30.790577 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:30.816436 1136586 cri.go:89] found id: ""
	I1208 01:55:30.816522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.816539 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:30.816547 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:30.816625 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:30.845918 1136586 cri.go:89] found id: ""
	I1208 01:55:30.845944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.845953 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:30.845960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:30.846020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:30.870263 1136586 cri.go:89] found id: ""
	I1208 01:55:30.870307 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.870317 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:30.870323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:30.870388 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:30.896013 1136586 cri.go:89] found id: ""
	I1208 01:55:30.896041 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.896049 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:30.896057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:30.896174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:30.921585 1136586 cri.go:89] found id: ""
	I1208 01:55:30.921612 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.921621 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:30.921628 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:30.921689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:30.951330 1136586 cri.go:89] found id: ""
	I1208 01:55:30.951355 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.951365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:30.951374 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:30.951391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:30.977110 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:30.977151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:31.009469 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:31.009525 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:31.071586 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:31.071635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:31.087881 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:31.087927 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:31.188603 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:31.173005    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175001    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175960    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.177836    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.178524    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:33.688896 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:33.699658 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:33.699730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:33.723918 1136586 cri.go:89] found id: ""
	I1208 01:55:33.723944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.723952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:33.723959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:33.724017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:33.748249 1136586 cri.go:89] found id: ""
	I1208 01:55:33.748272 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.748281 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:33.748287 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:33.748361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:33.774082 1136586 cri.go:89] found id: ""
	I1208 01:55:33.774165 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.774188 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:33.774208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:33.774300 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:33.804783 1136586 cri.go:89] found id: ""
	I1208 01:55:33.804808 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.804817 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:33.804824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:33.804883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:33.830537 1136586 cri.go:89] found id: ""
	I1208 01:55:33.830568 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.830578 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:33.830584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:33.830645 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:33.855676 1136586 cri.go:89] found id: ""
	I1208 01:55:33.855702 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.855711 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:33.855719 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:33.855788 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:33.881829 1136586 cri.go:89] found id: ""
	I1208 01:55:33.881907 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.881943 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:33.881968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:33.882061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:33.911849 1136586 cri.go:89] found id: ""
	I1208 01:55:33.911872 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.911880 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:33.911925 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:33.911937 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:33.939161 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:33.939188 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:33.997922 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:33.997962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:34.019097 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:34.019129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:34.086047 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:34.076333    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.077036    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.078821    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.079347    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.081184    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:34.086070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:34.086081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:36.616392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:36.627074 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:36.627155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:36.655354 1136586 cri.go:89] found id: ""
	I1208 01:55:36.655378 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.655545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:36.655552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:36.655616 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:36.684592 1136586 cri.go:89] found id: ""
	I1208 01:55:36.684615 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.684623 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:36.684629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:36.684693 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:36.715198 1136586 cri.go:89] found id: ""
	I1208 01:55:36.715224 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.715233 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:36.715240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:36.715304 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:36.744302 1136586 cri.go:89] found id: ""
	I1208 01:55:36.744327 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.744337 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:36.744343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:36.744405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:36.769612 1136586 cri.go:89] found id: ""
	I1208 01:55:36.769637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.769646 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:36.769652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:36.769712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:36.796116 1136586 cri.go:89] found id: ""
	I1208 01:55:36.796138 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.796147 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:36.796153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:36.796212 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:36.824398 1136586 cri.go:89] found id: ""
	I1208 01:55:36.824424 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.824433 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:36.824439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:36.824543 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:36.849915 1136586 cri.go:89] found id: ""
	I1208 01:55:36.849942 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.849951 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:36.849960 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:36.849972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:36.904949 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:36.904986 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:36.919890 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:36.919919 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:36.983074 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:36.974477    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.975033    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.976856    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.977264    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.978951    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:36.983095 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:36.983111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:37.008505 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:37.008605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.548042 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:39.558613 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:39.558684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:39.582845 1136586 cri.go:89] found id: ""
	I1208 01:55:39.582870 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.582878 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:39.582885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:39.582946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:39.607991 1136586 cri.go:89] found id: ""
	I1208 01:55:39.608016 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.608025 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:39.608032 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:39.608094 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:39.633661 1136586 cri.go:89] found id: ""
	I1208 01:55:39.633685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.633694 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:39.633701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:39.633765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:39.658962 1136586 cri.go:89] found id: ""
	I1208 01:55:39.658989 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.658998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:39.659005 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:39.659064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:39.684407 1136586 cri.go:89] found id: ""
	I1208 01:55:39.684490 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.684514 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:39.684534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:39.684622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:39.715084 1136586 cri.go:89] found id: ""
	I1208 01:55:39.715109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.715118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:39.715125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:39.715191 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:39.740328 1136586 cri.go:89] found id: ""
	I1208 01:55:39.740352 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.740361 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:39.740368 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:39.740457 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:39.771393 1136586 cri.go:89] found id: ""
	I1208 01:55:39.771420 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.771429 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:39.771438 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:39.771450 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:39.797255 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:39.797291 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.826926 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:39.826954 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:39.882889 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:39.882925 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:39.898019 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:39.898048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:39.963174 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:39.954059    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.954638    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.956325    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.957071    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.958660    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
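	Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot open a TCP connection to the apiserver at localhost:8443, consistent with the empty kube-apiserver listings above. A sketch for confirming the port state directly on the node (that curl and ss are present in the node image is an assumption):

	    # Probe the endpoint the kubeconfig points at; "connection refused" means
	    # nothing is listening on 8443, matching the missing kube-apiserver container.
	    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver not reachable"
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on port 8443"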
	I1208 01:55:42.463393 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:42.473927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:42.474000 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:42.499722 1136586 cri.go:89] found id: ""
	I1208 01:55:42.499747 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.499757 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:42.499764 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:42.499842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:42.525555 1136586 cri.go:89] found id: ""
	I1208 01:55:42.525637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.525664 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:42.525671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:42.525745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:42.551105 1136586 cri.go:89] found id: ""
	I1208 01:55:42.551135 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.551144 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:42.551156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:42.551217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:42.576427 1136586 cri.go:89] found id: ""
	I1208 01:55:42.576500 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.576515 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:42.576522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:42.576587 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:42.606069 1136586 cri.go:89] found id: ""
	I1208 01:55:42.606102 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.606111 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:42.606118 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:42.606190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:42.631166 1136586 cri.go:89] found id: ""
	I1208 01:55:42.631193 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.631202 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:42.631208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:42.631267 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:42.655160 1136586 cri.go:89] found id: ""
	I1208 01:55:42.655238 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.655255 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:42.655266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:42.655329 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:42.680010 1136586 cri.go:89] found id: ""
	I1208 01:55:42.680085 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.680100 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:42.680111 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:42.680124 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:42.695151 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:42.695175 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:42.763022 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:42.763046 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:42.763059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:42.788301 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:42.788337 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:42.823956 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:42.823981 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.380090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:45.395413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:45.395485 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:45.439897 1136586 cri.go:89] found id: ""
	I1208 01:55:45.439925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.439935 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:45.439942 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:45.440007 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:45.465988 1136586 cri.go:89] found id: ""
	I1208 01:55:45.466012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.466020 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:45.466027 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:45.466099 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:45.491807 1136586 cri.go:89] found id: ""
	I1208 01:55:45.491834 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.491843 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:45.491850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:45.491913 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:45.516818 1136586 cri.go:89] found id: ""
	I1208 01:55:45.516843 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.516854 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:45.516861 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:45.516921 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:45.542497 1136586 cri.go:89] found id: ""
	I1208 01:55:45.542522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.542531 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:45.542538 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:45.542609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:45.568083 1136586 cri.go:89] found id: ""
	I1208 01:55:45.568109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.568118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:45.568125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:45.568183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:45.593517 1136586 cri.go:89] found id: ""
	I1208 01:55:45.593544 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.593554 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:45.593561 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:45.593674 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:45.618329 1136586 cri.go:89] found id: ""
	I1208 01:55:45.618356 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.618366 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:45.618375 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:45.618387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:45.682426 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:45.674188    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.674739    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676256    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676719    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.678224    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:45.682475 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:45.682489 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:45.708017 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:45.708054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:45.737945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:45.737975 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.793795 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:45.793830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
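	Between log-gathering passes the runner re-probes for an apiserver process roughly every three seconds (see the cycle timestamps :27, :30, :33, and so on above). A rough shell equivalent of that wait loop (a sketch only; minikube implements the retry in Go, and the 3-second interval is inferred from the timestamps):

	    # Poll for the apiserver process using the same pgrep probe as the log,
	    # sleeping ~3s between attempts, until a matching process appears.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3
	    done
	    echo "kube-apiserver process is up"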
	I1208 01:55:48.309212 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:48.320148 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:48.320220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:48.367705 1136586 cri.go:89] found id: ""
	I1208 01:55:48.367730 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.367739 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:48.367745 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:48.367804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:48.421729 1136586 cri.go:89] found id: ""
	I1208 01:55:48.421754 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.421763 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:48.421769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:48.421827 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:48.447771 1136586 cri.go:89] found id: ""
	I1208 01:55:48.447795 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.447804 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:48.447810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:48.447869 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:48.473161 1136586 cri.go:89] found id: ""
	I1208 01:55:48.473187 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.473196 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:48.473203 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:48.473265 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:48.498698 1136586 cri.go:89] found id: ""
	I1208 01:55:48.498723 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.498732 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:48.498738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:48.498798 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:48.527882 1136586 cri.go:89] found id: ""
	I1208 01:55:48.527908 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.527918 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:48.527925 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:48.528028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:48.554285 1136586 cri.go:89] found id: ""
	I1208 01:55:48.554311 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.554319 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:48.554326 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:48.554385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:48.580502 1136586 cri.go:89] found id: ""
	I1208 01:55:48.580529 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.580538 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:48.580548 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:48.580580 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:48.610294 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:48.610319 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:48.665141 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:48.665179 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.682234 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:48.682262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:48.759351 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:48.759375 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:48.759387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
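	The probe recorded above is a fixed sequence of shell commands, so the same check can be replayed by hand when triaging this failure. A minimal sketch, using only commands that appear in the log itself (the profile name is deliberately left as a placeholder; run the commands inside the node via minikube ssh -p <profile>):
	
	# Is any apiserver process alive on the node?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	# List containers (running or exited) per control-plane component; empty output means none exist.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== $c =="; sudo crictl ps -a --quiet --name="$c"
	done
	
	# Fall back to the unit logs when nothing is found.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	
	# Confirm whether the API endpoint answers at all.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig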
	[... log trimmed: the probe cycle above repeats, unchanged apart from timestamps and PIDs, at 01:55:51, 01:55:54, 01:55:57, 01:56:00, 01:56:03, 01:56:06, and 01:56:09. In every pass crictl finds no containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard, and every "kubectl describe nodes" attempt (kubectl PIDs 4823, 4915, 5047, 5154, 5254, 5365, 5480) fails with the same "connection refused" error against localhost:8443 ...]
	I1208 01:56:12.215648 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:12.227315 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:12.227391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:12.254343 1136586 cri.go:89] found id: ""
	I1208 01:56:12.254369 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.254378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:12.254385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:12.254467 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:12.279481 1136586 cri.go:89] found id: ""
	I1208 01:56:12.279550 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.279574 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:12.279594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:12.279683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:12.305844 1136586 cri.go:89] found id: ""
	I1208 01:56:12.305910 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.305933 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:12.305951 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:12.306041 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:12.330060 1136586 cri.go:89] found id: ""
	I1208 01:56:12.330139 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.330162 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:12.330181 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:12.330273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:12.356745 1136586 cri.go:89] found id: ""
	I1208 01:56:12.356813 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.356840 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:12.356858 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:12.356943 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:12.386368 1136586 cri.go:89] found id: ""
	I1208 01:56:12.386475 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.386492 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:12.386500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:12.386563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:12.412659 1136586 cri.go:89] found id: ""
	I1208 01:56:12.412685 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.412694 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:12.412700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:12.412779 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:12.440569 1136586 cri.go:89] found id: ""
	I1208 01:56:12.440596 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.440604 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:12.440615 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:12.440626 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:12.496637 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:12.496674 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:12.511594 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:12.511624 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:12.580748 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:12.580771 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:12.580784 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:12.613723 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:12.613802 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
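	The "container status" command above is a fallback chain: prefer crictl when it is on PATH, otherwise fall back to docker. Restated with explicit quoting (a sketch equivalent to the backtick form the log runs):
	
	# `which crictl || echo crictl` keeps the command string non-empty even
	# when crictl is missing; the trailing || then falls back to docker if
	# the crictl invocation fails.
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a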
	I1208 01:56:15.152673 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:15.163614 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:15.163688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:15.192414 1136586 cri.go:89] found id: ""
	I1208 01:56:15.192449 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.192458 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:15.192465 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:15.192537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:15.219157 1136586 cri.go:89] found id: ""
	I1208 01:56:15.219182 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.219191 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:15.219198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:15.219258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:15.244756 1136586 cri.go:89] found id: ""
	I1208 01:56:15.244824 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.244839 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:15.244846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:15.244907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:15.271473 1136586 cri.go:89] found id: ""
	I1208 01:56:15.271546 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.271562 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:15.271569 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:15.271637 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:15.297385 1136586 cri.go:89] found id: ""
	I1208 01:56:15.297411 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.297430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:15.297437 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:15.297506 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:15.323057 1136586 cri.go:89] found id: ""
	I1208 01:56:15.323127 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.323149 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:15.323158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:15.323226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:15.348696 1136586 cri.go:89] found id: ""
	I1208 01:56:15.348771 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.348788 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:15.348795 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:15.348857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:15.373461 1136586 cri.go:89] found id: ""
	I1208 01:56:15.373483 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.373491 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:15.373500 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:15.373512 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:15.403816 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:15.403845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:15.463833 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:15.463875 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:15.479494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:15.479522 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:15.551161 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:15.551185 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:15.551199 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.077116 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:18.087881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:18.087956 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:18.116452 1136586 cri.go:89] found id: ""
	I1208 01:56:18.116480 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.116490 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:18.116497 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:18.116558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:18.147311 1136586 cri.go:89] found id: ""
	I1208 01:56:18.147339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.147347 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:18.147353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:18.147415 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:18.173654 1136586 cri.go:89] found id: ""
	I1208 01:56:18.173680 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.173689 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:18.173695 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:18.173754 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:18.198118 1136586 cri.go:89] found id: ""
	I1208 01:56:18.198142 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.198151 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:18.198158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:18.198220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:18.229347 1136586 cri.go:89] found id: ""
	I1208 01:56:18.229371 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.229379 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:18.229385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:18.229443 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:18.253505 1136586 cri.go:89] found id: ""
	I1208 01:56:18.253528 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.253536 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:18.253542 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:18.253601 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:18.279471 1136586 cri.go:89] found id: ""
	I1208 01:56:18.279496 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.279506 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:18.279513 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:18.279571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:18.309796 1136586 cri.go:89] found id: ""
	I1208 01:56:18.309819 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.309827 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:18.309839 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:18.309850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:18.366744 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:18.366779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:18.381719 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:18.381749 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:18.448045 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:18.448070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:18.448082 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.473293 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:18.473332 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.004404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:21.017333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:21.017424 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:21.042756 1136586 cri.go:89] found id: ""
	I1208 01:56:21.042823 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.042839 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:21.042847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:21.042907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:21.068017 1136586 cri.go:89] found id: ""
	I1208 01:56:21.068042 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.068051 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:21.068057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:21.068134 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:21.095695 1136586 cri.go:89] found id: ""
	I1208 01:56:21.095719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.095729 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:21.095735 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:21.095833 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:21.126473 1136586 cri.go:89] found id: ""
	I1208 01:56:21.126499 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.126508 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:21.126515 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:21.126578 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:21.159320 1136586 cri.go:89] found id: ""
	I1208 01:56:21.159344 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.159354 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:21.159360 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:21.159421 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:21.189716 1136586 cri.go:89] found id: ""
	I1208 01:56:21.189740 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.189790 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:21.189808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:21.189875 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:21.215065 1136586 cri.go:89] found id: ""
	I1208 01:56:21.215090 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.215099 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:21.215105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:21.215186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:21.239527 1136586 cri.go:89] found id: ""
	I1208 01:56:21.239551 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.239559 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:21.239568 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:21.239581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:21.303585 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:21.303607 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:21.303622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:21.329232 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:21.329269 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.357399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:21.357429 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:21.413905 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:21.413941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:23.930606 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:23.941524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:23.941609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:23.969400 1136586 cri.go:89] found id: ""
	I1208 01:56:23.969431 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.969441 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:23.969447 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:23.969510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:23.999105 1136586 cri.go:89] found id: ""
	I1208 01:56:23.999131 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.999140 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:23.999147 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:23.999216 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:24.031489 1136586 cri.go:89] found id: ""
	I1208 01:56:24.031517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.031527 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:24.031533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:24.031598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:24.057876 1136586 cri.go:89] found id: ""
	I1208 01:56:24.057902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.057911 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:24.057917 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:24.057978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:24.092220 1136586 cri.go:89] found id: ""
	I1208 01:56:24.092247 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.092257 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:24.092263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:24.092324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:24.125261 1136586 cri.go:89] found id: ""
	I1208 01:56:24.125289 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.125298 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:24.125306 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:24.125367 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:24.153744 1136586 cri.go:89] found id: ""
	I1208 01:56:24.153772 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.153782 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:24.153789 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:24.153852 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:24.179839 1136586 cri.go:89] found id: ""
	I1208 01:56:24.179866 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.179875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:24.179884 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:24.179916 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:24.237479 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:24.237514 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:24.252654 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:24.252693 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:24.325211 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:24.325232 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:24.325244 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:24.351049 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:24.351084 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:26.879645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:26.891936 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:26.892009 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:26.916974 1136586 cri.go:89] found id: ""
	I1208 01:56:26.916998 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.917007 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:26.917013 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:26.917072 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:26.941861 1136586 cri.go:89] found id: ""
	I1208 01:56:26.941885 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.941894 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:26.941900 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:26.941963 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:26.974560 1136586 cri.go:89] found id: ""
	I1208 01:56:26.974587 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.974596 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:26.974602 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:26.974663 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:26.999892 1136586 cri.go:89] found id: ""
	I1208 01:56:26.999921 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.999930 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:26.999937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:27.000021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:27.030397 1136586 cri.go:89] found id: ""
	I1208 01:56:27.030421 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.030430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:27.030436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:27.030521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:27.059896 1136586 cri.go:89] found id: ""
	I1208 01:56:27.059923 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.059932 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:27.059941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:27.059999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:27.084629 1136586 cri.go:89] found id: ""
	I1208 01:56:27.084656 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.084665 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:27.084671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:27.084733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:27.119162 1136586 cri.go:89] found id: ""
	I1208 01:56:27.119185 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.119193 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:27.119202 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:27.119213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:27.179450 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:27.179487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:27.194459 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:27.194486 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:27.261775 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:27.261797 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:27.261810 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:27.287303 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:27.287338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:29.820302 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:29.830851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:29.830917 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:29.869685 1136586 cri.go:89] found id: ""
	I1208 01:56:29.869717 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.869726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:29.869733 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:29.869789 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:29.904021 1136586 cri.go:89] found id: ""
	I1208 01:56:29.904048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.904057 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:29.904063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:29.904122 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:29.929826 1136586 cri.go:89] found id: ""
	I1208 01:56:29.929854 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.929864 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:29.929870 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:29.929935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:29.954915 1136586 cri.go:89] found id: ""
	I1208 01:56:29.954939 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.954947 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:29.954954 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:29.955013 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:29.980194 1136586 cri.go:89] found id: ""
	I1208 01:56:29.980218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.980227 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:29.980233 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:29.980296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:30.034520 1136586 cri.go:89] found id: ""
	I1208 01:56:30.034556 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.034566 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:30.034573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:30.034648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:30.069395 1136586 cri.go:89] found id: ""
	I1208 01:56:30.069422 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.069432 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:30.069439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:30.069507 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:30.109430 1136586 cri.go:89] found id: ""
	I1208 01:56:30.109459 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.109469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:30.109479 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:30.109491 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:30.146595 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:30.146631 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:30.206376 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:30.206419 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:30.225510 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:30.225621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:30.296464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:30.287753    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.288259    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290021    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290422    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.291920    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:30.296484 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:30.296497 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:32.823121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:32.833454 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:32.833529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:32.868695 1136586 cri.go:89] found id: ""
	I1208 01:56:32.868721 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.868740 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:32.868747 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:32.868821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:32.906232 1136586 cri.go:89] found id: ""
	I1208 01:56:32.906253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.906261 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:32.906267 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:32.906327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:32.932154 1136586 cri.go:89] found id: ""
	I1208 01:56:32.932181 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.932190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:32.932200 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:32.932262 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:32.957782 1136586 cri.go:89] found id: ""
	I1208 01:56:32.957805 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.957814 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:32.957821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:32.957886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:32.983951 1136586 cri.go:89] found id: ""
	I1208 01:56:32.983978 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.983988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:32.983995 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:32.984057 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:33.011290 1136586 cri.go:89] found id: ""
	I1208 01:56:33.011316 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.011325 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:33.011340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:33.011410 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:33.038703 1136586 cri.go:89] found id: ""
	I1208 01:56:33.038726 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.038735 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:33.038741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:33.038799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:33.063041 1136586 cri.go:89] found id: ""
	I1208 01:56:33.063065 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.063074 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:33.063084 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:33.063115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:33.078006 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:33.078036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:33.170567 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:33.170591 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:33.170607 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:33.196077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:33.196111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:33.227121 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:33.227152 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
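Each pass above probes the node for every expected control-plane container by name, then falls back to gathering raw logs when all eight probes come back empty. A minimal sketch of that probe loop, assuming shell access to the node and that crictl is on PATH there (both implied by the ssh_runner commands in the log); the loop structure is an illustration, only the crictl invocation is taken verbatim from the log:

    #!/bin/bash
    # Probe each expected control-plane container by name, as the cri.go
    # listings above do. An empty result reproduces the "No container was
    # found matching" warnings.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done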
	I1208 01:56:35.783290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:35.793700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:35.793778 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:35.821903 1136586 cri.go:89] found id: ""
	I1208 01:56:35.821937 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.821946 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:35.821953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:35.822014 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:35.854878 1136586 cri.go:89] found id: ""
	I1208 01:56:35.854902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.854910 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:35.854916 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:35.854978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:35.881395 1136586 cri.go:89] found id: ""
	I1208 01:56:35.881418 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.881426 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:35.881432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:35.881490 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:35.910658 1136586 cri.go:89] found id: ""
	I1208 01:56:35.910679 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.910688 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:35.910694 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:35.910753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:35.939089 1136586 cri.go:89] found id: ""
	I1208 01:56:35.939114 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.939129 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:35.939137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:35.939199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:35.964135 1136586 cri.go:89] found id: ""
	I1208 01:56:35.964158 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.964166 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:35.964173 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:35.964235 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:35.990669 1136586 cri.go:89] found id: ""
	I1208 01:56:35.990692 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.990701 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:35.990707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:35.990770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:36.020165 1136586 cri.go:89] found id: ""
	I1208 01:56:36.020191 1136586 logs.go:282] 0 containers: []
	W1208 01:56:36.020207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:36.020217 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:36.020228 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:36.076411 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:36.076452 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:36.093602 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:36.093683 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:36.181516 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:36.181540 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:36.181552 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:36.207107 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:36.207142 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
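The recurring describe-nodes failure is consistent with the empty probes: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl's API discovery is refused before it can issue any request. A quick way to separate "apiserver down" from "kubectl misconfigured", using the port, binary path, and kubeconfig shown in the log (the curl probe is an added illustration, not something the harness runs):

    # Connection refused here means no listener at all on the apiserver port;
    # -k skips TLS verification since only connectivity is being checked.
    curl -sk --max-time 5 https://localhost:8443/healthz \
      || echo "no apiserver listening on 8443"

    # The exact kubectl invocation the log gatherer retries above:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig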
	I1208 01:56:38.735690 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:38.746691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:38.746767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:38.773309 1136586 cri.go:89] found id: ""
	I1208 01:56:38.773339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.773349 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:38.773356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:38.773423 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:38.801208 1136586 cri.go:89] found id: ""
	I1208 01:56:38.801235 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.801245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:38.801254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:38.801317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:38.826539 1136586 cri.go:89] found id: ""
	I1208 01:56:38.826566 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.826575 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:38.826582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:38.826642 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:38.865488 1136586 cri.go:89] found id: ""
	I1208 01:56:38.865517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.865527 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:38.865533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:38.865594 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:38.900627 1136586 cri.go:89] found id: ""
	I1208 01:56:38.900655 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.900664 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:38.900670 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:38.900733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:38.927847 1136586 cri.go:89] found id: ""
	I1208 01:56:38.927871 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.927880 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:38.927887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:38.927949 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:38.952594 1136586 cri.go:89] found id: ""
	I1208 01:56:38.952666 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.952689 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:38.952714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:38.952803 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:38.978089 1136586 cri.go:89] found id: ""
	I1208 01:56:38.978116 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.978125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:38.978134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:38.978147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:39.047378 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:39.047401 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:39.047414 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:39.073359 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:39.073402 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:39.112761 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:39.112796 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:39.176177 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:39.176214 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:41.692238 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:41.702585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:41.702656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:41.726879 1136586 cri.go:89] found id: ""
	I1208 01:56:41.726913 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.726923 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:41.726930 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:41.726996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:41.752119 1136586 cri.go:89] found id: ""
	I1208 01:56:41.752143 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.752152 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:41.752158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:41.752215 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:41.777446 1136586 cri.go:89] found id: ""
	I1208 01:56:41.777473 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.777482 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:41.777488 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:41.777548 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:41.804077 1136586 cri.go:89] found id: ""
	I1208 01:56:41.804103 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.804112 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:41.804119 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:41.804179 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:41.828883 1136586 cri.go:89] found id: ""
	I1208 01:56:41.828908 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.828917 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:41.828924 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:41.828987 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:41.875100 1136586 cri.go:89] found id: ""
	I1208 01:56:41.875128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.875138 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:41.875145 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:41.875204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:41.907099 1136586 cri.go:89] found id: ""
	I1208 01:56:41.907126 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.907136 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:41.907142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:41.907201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:41.936702 1136586 cri.go:89] found id: ""
	I1208 01:56:41.936729 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.936738 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:41.936748 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:41.936780 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:41.992993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:41.993029 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:42.008895 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:42.008988 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:42.090561 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:42.090592 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:42.090605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:42.127950 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:42.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
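Between passes the runner waits briefly and re-checks for an apiserver process; the pgrep lines above land roughly three seconds apart. A rough reconstruction of that wait loop (the interval and the deadline are assumptions for illustration, not minikube's actual constants):

    # Poll for a running kube-apiserver process, as the pgrep lines above do.
    deadline=$((SECONDS + 480))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done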
	I1208 01:56:44.678288 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:44.690356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:44.690429 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:44.716072 1136586 cri.go:89] found id: ""
	I1208 01:56:44.716095 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.716105 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:44.716111 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:44.716173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:44.742318 1136586 cri.go:89] found id: ""
	I1208 01:56:44.742347 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.742357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:44.742363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:44.742428 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:44.768786 1136586 cri.go:89] found id: ""
	I1208 01:56:44.768814 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.768824 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:44.768830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:44.768892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:44.794997 1136586 cri.go:89] found id: ""
	I1208 01:56:44.795020 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.795028 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:44.795035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:44.795093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:44.824626 1136586 cri.go:89] found id: ""
	I1208 01:56:44.824693 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.824719 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:44.824738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:44.824823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:44.854631 1136586 cri.go:89] found id: ""
	I1208 01:56:44.854660 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.854682 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:44.854707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:44.854790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:44.886832 1136586 cri.go:89] found id: ""
	I1208 01:56:44.886853 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.886862 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:44.886868 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:44.886931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:44.918383 1136586 cri.go:89] found id: ""
	I1208 01:56:44.918409 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.918420 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:44.918430 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:44.918441 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:44.974124 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:44.974160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:44.989499 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:44.989581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:45.183353 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:45.183384 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:45.183415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:45.225041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:45.225130 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:47.776374 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:47.786874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:47.786944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:47.817071 1136586 cri.go:89] found id: ""
	I1208 01:56:47.817097 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.817106 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:47.817113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:47.817173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:47.848935 1136586 cri.go:89] found id: ""
	I1208 01:56:47.848964 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.848972 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:47.848978 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:47.849039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:47.879145 1136586 cri.go:89] found id: ""
	I1208 01:56:47.879175 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.879190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:47.879196 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:47.879255 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:47.919571 1136586 cri.go:89] found id: ""
	I1208 01:56:47.919595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.919605 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:47.919612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:47.919678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:47.945072 1136586 cri.go:89] found id: ""
	I1208 01:56:47.945098 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.945107 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:47.945113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:47.945176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:47.972399 1136586 cri.go:89] found id: ""
	I1208 01:56:47.972423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.972432 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:47.972446 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:47.972513 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:47.998198 1136586 cri.go:89] found id: ""
	I1208 01:56:47.998225 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.998234 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:47.998240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:47.998357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:48.026417 1136586 cri.go:89] found id: ""
	I1208 01:56:48.026469 1136586 logs.go:282] 0 containers: []
	W1208 01:56:48.026480 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:48.026514 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:48.026534 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:48.083726 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:48.083765 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:48.102473 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:48.102503 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:48.195413 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:48.195448 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:48.195461 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:48.222088 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:48.222125 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:50.752185 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.763217 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:50.763296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:50.792851 1136586 cri.go:89] found id: ""
	I1208 01:56:50.792877 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.792886 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:50.792893 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:50.792952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:50.818544 1136586 cri.go:89] found id: ""
	I1208 01:56:50.818573 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.818582 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:50.818590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:50.818653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:50.856256 1136586 cri.go:89] found id: ""
	I1208 01:56:50.856286 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.856296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:50.856303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:50.856365 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:50.890254 1136586 cri.go:89] found id: ""
	I1208 01:56:50.890277 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.890286 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:50.890292 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:50.890351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:50.919013 1136586 cri.go:89] found id: ""
	I1208 01:56:50.919039 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.919048 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:50.919054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:50.919115 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:50.943865 1136586 cri.go:89] found id: ""
	I1208 01:56:50.943888 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.943897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:50.943903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:50.943968 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:50.967885 1136586 cri.go:89] found id: ""
	I1208 01:56:50.967912 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.967921 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:50.967927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:50.967984 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:50.997744 1136586 cri.go:89] found id: ""
	I1208 01:56:50.997779 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.997788 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:50.997854 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:50.997874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:51.066108 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:51.066131 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:51.066144 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:51.092098 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:51.092134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:51.129363 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:51.129392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:51.192049 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:51.192086 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:53.707235 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.718177 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:53.718245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:53.743649 1136586 cri.go:89] found id: ""
	I1208 01:56:53.743674 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.743684 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:53.743690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:53.743755 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:53.769475 1136586 cri.go:89] found id: ""
	I1208 01:56:53.769503 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.769512 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:53.769519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:53.769581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:53.795104 1136586 cri.go:89] found id: ""
	I1208 01:56:53.795128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.795137 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:53.795143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:53.795219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:53.824300 1136586 cri.go:89] found id: ""
	I1208 01:56:53.824322 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.824335 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:53.824342 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:53.824403 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:53.858957 1136586 cri.go:89] found id: ""
	I1208 01:56:53.858984 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.858993 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:53.858999 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:53.859059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:53.889936 1136586 cri.go:89] found id: ""
	I1208 01:56:53.889958 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.889967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:53.889974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:53.890042 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:53.917197 1136586 cri.go:89] found id: ""
	I1208 01:56:53.917221 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.917230 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:53.917236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:53.917301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:53.944246 1136586 cri.go:89] found id: ""
	I1208 01:56:53.944313 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.944340 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:53.944364 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:53.944395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:54.000224 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:54.000263 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:54.018576 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:54.018610 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:54.091957 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:54.092037 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:54.092064 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:54.121226 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:54.121262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
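Every pass ends by collecting the same log sources: the kubelet and containerd units via journalctl, filtered dmesg, kubectl describe nodes, and a container status listing. The same bundle can be collected by hand with the individual commands copied from the ssh_runner lines above (redirecting everything into one file is an added convenience, not part of the harness):

    # Gather the same diagnostics the harness collects on each pass.
    {
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u containerd -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    } > minikube-diagnostics.txt 2>&1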
	I1208 01:56:56.665113 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.675727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:56.675793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:56.702486 1136586 cri.go:89] found id: ""
	I1208 01:56:56.702512 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.702521 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:56.702536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:56.702595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:56.727464 1136586 cri.go:89] found id: ""
	I1208 01:56:56.727490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.727499 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:56.727506 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:56.727574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:56.755210 1136586 cri.go:89] found id: ""
	I1208 01:56:56.755242 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.755252 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:56.755259 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:56.755317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:56.780366 1136586 cri.go:89] found id: ""
	I1208 01:56:56.780394 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.780403 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:56.780409 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:56.780502 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:56.805514 1136586 cri.go:89] found id: ""
	I1208 01:56:56.805541 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.805551 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:56.805557 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:56.805615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:56.830960 1136586 cri.go:89] found id: ""
	I1208 01:56:56.830985 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.830994 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:56.831001 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:56.831067 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:56.877742 1136586 cri.go:89] found id: ""
	I1208 01:56:56.877812 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.877847 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:56.877873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:56.877969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:56.909088 1136586 cri.go:89] found id: ""
	I1208 01:56:56.909173 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.909197 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:56.909218 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:56.909261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:56.937087 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:56.937122 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.964566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:56.964593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:57.025871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:57.025917 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:57.041167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:57.041200 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:57.113620 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
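All of the describe-nodes failures share one root cause, visible in the stderr: kubectl dials https://localhost:8443 (the endpoint recorded in /var/lib/minikube/kubeconfig) and gets connection refused, because no kube-apiserver container exists. A hypothetical manual probe from inside the node (e.g. over minikube ssh; this check is illustrative and not part of the harness):

	# Expect "connection refused" for as long as the crictl probes find no apiserver.
	curl -sk https://localhost:8443/healthz || echo "apiserver not listening on 8443"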
	I1208 01:56:59.615300 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.625998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:59.626071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:59.651013 1136586 cri.go:89] found id: ""
	I1208 01:56:59.651040 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.651050 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:59.651058 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:59.651140 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:59.676526 1136586 cri.go:89] found id: ""
	I1208 01:56:59.676595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.676619 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:59.676632 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:59.676706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:59.705956 1136586 cri.go:89] found id: ""
	I1208 01:56:59.705982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.705992 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:59.705998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:59.706058 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:59.732960 1136586 cri.go:89] found id: ""
	I1208 01:56:59.732988 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.732998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:59.733004 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:59.733064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:59.761227 1136586 cri.go:89] found id: ""
	I1208 01:56:59.761253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.761262 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:59.761268 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:59.761332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:59.795189 1136586 cri.go:89] found id: ""
	I1208 01:56:59.795218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.795227 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:59.795235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:59.795296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:59.820209 1136586 cri.go:89] found id: ""
	I1208 01:56:59.820278 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.820303 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:59.820317 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:59.820397 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:59.854906 1136586 cri.go:89] found id: ""
	I1208 01:56:59.854982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.855003 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:59.855031 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:59.855075 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:59.895804 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:59.895880 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:59.953038 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:59.953076 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:59.968348 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:59.968383 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:00.183275 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:00.183303 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:00.183318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
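The gathering steps are plain shell invocations over SSH and can be replayed by hand. These are taken verbatim from the ssh_runner lines above:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Of these, only the describe-nodes call is logged as failing (Process exited with status 1); the journalctl, dmesg, and container-status commands complete without warnings in every cycle.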
	I1208 01:57:02.767941 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.778692 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:02.778767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:02.804099 1136586 cri.go:89] found id: ""
	I1208 01:57:02.804168 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.804192 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:02.804207 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:02.804282 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:02.829415 1136586 cri.go:89] found id: ""
	I1208 01:57:02.829442 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.829451 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:02.829456 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:02.829516 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:02.876418 1136586 cri.go:89] found id: ""
	I1208 01:57:02.876448 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.876456 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:02.876462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:02.876521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:02.908999 1136586 cri.go:89] found id: ""
	I1208 01:57:02.909021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.909030 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:02.909036 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:02.909095 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:02.935740 1136586 cri.go:89] found id: ""
	I1208 01:57:02.935763 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.935772 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:02.935781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:02.935845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:02.962615 1136586 cri.go:89] found id: ""
	I1208 01:57:02.962640 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.962649 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:02.962676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:02.962762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:02.988338 1136586 cri.go:89] found id: ""
	I1208 01:57:02.988413 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.988447 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:02.988469 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:02.988563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:03.016087 1136586 cri.go:89] found id: ""
	I1208 01:57:03.016115 1136586 logs.go:282] 0 containers: []
	W1208 01:57:03.016125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:03.016135 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:03.016147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:03.045768 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:03.045798 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:03.103820 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:03.103856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:03.119506 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:03.119544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:03.188553 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:03.188577 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:03.188591 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.714622 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.728070 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:05.728144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:05.752683 1136586 cri.go:89] found id: ""
	I1208 01:57:05.752709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.752718 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:05.752725 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:05.752804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:05.777888 1136586 cri.go:89] found id: ""
	I1208 01:57:05.777926 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.777935 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:05.777941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:05.778004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:05.803200 1136586 cri.go:89] found id: ""
	I1208 01:57:05.803227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.803236 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:05.803243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:05.803305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:05.828694 1136586 cri.go:89] found id: ""
	I1208 01:57:05.828719 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.828728 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:05.828734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:05.828795 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:05.871706 1136586 cri.go:89] found id: ""
	I1208 01:57:05.871734 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.871743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:05.871750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:05.871810 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:05.910109 1136586 cri.go:89] found id: ""
	I1208 01:57:05.910130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.910139 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:05.910146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:05.910211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:05.935420 1136586 cri.go:89] found id: ""
	I1208 01:57:05.935446 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.935455 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:05.935463 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:05.935524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:05.964805 1136586 cri.go:89] found id: ""
	I1208 01:57:05.964830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.964840 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:05.964850 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:05.964861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.991812 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:05.991850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:06.023289 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:06.023318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:06.079947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:06.079984 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:06.094973 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:06.095001 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:06.164494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
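A detail worth noting in the container-status line: the backtick substitution `which crictl || echo crictl` resolves to the full crictl path when the binary is installed and otherwise echoes the bare name, and if the crictl invocation itself then fails, the outer `||` falls through to docker:

	# As run by the harness: prefer crictl if installed, else fall back to docker ps.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

This makes the same gathering command usable on both containerd-based and docker-based nodes.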
	I1208 01:57:08.664783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.675873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:08.675951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:08.701544 1136586 cri.go:89] found id: ""
	I1208 01:57:08.701570 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.701579 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:08.701585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:08.701644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:08.726739 1136586 cri.go:89] found id: ""
	I1208 01:57:08.726761 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.726770 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:08.726777 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:08.726834 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:08.752551 1136586 cri.go:89] found id: ""
	I1208 01:57:08.752579 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.752590 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:08.752596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:08.752661 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:08.785394 1136586 cri.go:89] found id: ""
	I1208 01:57:08.785418 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.785427 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:08.785434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:08.785494 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:08.809379 1136586 cri.go:89] found id: ""
	I1208 01:57:08.809411 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.809420 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:08.809426 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:08.809493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:08.834793 1136586 cri.go:89] found id: ""
	I1208 01:57:08.834820 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.834829 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:08.834836 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:08.834895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:08.871040 1136586 cri.go:89] found id: ""
	I1208 01:57:08.871067 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.871077 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:08.871083 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:08.871149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:08.898916 1136586 cri.go:89] found id: ""
	I1208 01:57:08.898943 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.898953 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:08.898961 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:08.898973 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:08.958751 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:08.958791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:08.975804 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:08.975842 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:09.045728 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:09.045754 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:09.045768 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:09.071802 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:09.071844 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:11.602631 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.621366 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:11.621447 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:11.654343 1136586 cri.go:89] found id: ""
	I1208 01:57:11.654378 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.654387 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:11.654396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:11.654496 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:11.687384 1136586 cri.go:89] found id: ""
	I1208 01:57:11.687421 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.687431 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:11.687444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:11.687515 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:11.716671 1136586 cri.go:89] found id: ""
	I1208 01:57:11.716709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.716720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:11.716726 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:11.716796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:11.742357 1136586 cri.go:89] found id: ""
	I1208 01:57:11.742391 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.742400 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:11.742407 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:11.742493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:11.768963 1136586 cri.go:89] found id: ""
	I1208 01:57:11.768990 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.768999 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:11.769006 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:11.769075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:11.793322 1136586 cri.go:89] found id: ""
	I1208 01:57:11.793354 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.793364 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:11.793371 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:11.793438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:11.819428 1136586 cri.go:89] found id: ""
	I1208 01:57:11.819473 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.819483 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:11.819490 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:11.819561 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:11.856579 1136586 cri.go:89] found id: ""
	I1208 01:57:11.856620 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.856629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:11.856639 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:11.856650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:11.920066 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:11.920104 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:11.936490 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:11.936579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:12.003301 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:12.003353 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:12.003368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:12.034123 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:12.034162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
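Each cycle starts with a process-level check before the crictl probes. With pgrep, -f matches the pattern against the full command line, -x requires the whole line to match, and -n reports only the newest such process, so the call answers "is a kube-apiserver process for this cluster running right now?". A hypothetical standalone equivalent:

	# pgrep exits non-zero when nothing matches, which is consistent with the
	# crictl probes that follow finding no containers in every cycle of this log.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" || echo "no apiserver process"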
	I1208 01:57:14.566675 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.577850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:14.577926 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:14.614645 1136586 cri.go:89] found id: ""
	I1208 01:57:14.614674 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.614683 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:14.614689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:14.614746 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:14.653668 1136586 cri.go:89] found id: ""
	I1208 01:57:14.653689 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.653698 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:14.653704 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:14.653760 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:14.683123 1136586 cri.go:89] found id: ""
	I1208 01:57:14.683147 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.683155 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:14.683162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:14.683220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:14.712290 1136586 cri.go:89] found id: ""
	I1208 01:57:14.712317 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.712326 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:14.712333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:14.712411 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:14.741728 1136586 cri.go:89] found id: ""
	I1208 01:57:14.741752 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.741761 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:14.741768 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:14.741830 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:14.766640 1136586 cri.go:89] found id: ""
	I1208 01:57:14.766675 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.766684 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:14.766690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:14.766749 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:14.795809 1136586 cri.go:89] found id: ""
	I1208 01:57:14.795833 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.795843 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:14.795850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:14.795908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:14.824523 1136586 cri.go:89] found id: ""
	I1208 01:57:14.824546 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.824555 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:14.824564 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:14.824579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:14.883992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:14.884032 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:14.899927 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:14.899958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:14.971584 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:14.971605 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:14.971618 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:14.997478 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:14.997516 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:17.562433 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.573169 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:17.573243 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:17.604838 1136586 cri.go:89] found id: ""
	I1208 01:57:17.604866 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.604879 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:17.604885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:17.604945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:17.651166 1136586 cri.go:89] found id: ""
	I1208 01:57:17.651193 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.651202 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:17.651208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:17.651275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:17.679266 1136586 cri.go:89] found id: ""
	I1208 01:57:17.679302 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.679312 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:17.679318 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:17.679379 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:17.703476 1136586 cri.go:89] found id: ""
	I1208 01:57:17.703504 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.703513 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:17.703519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:17.703579 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:17.732349 1136586 cri.go:89] found id: ""
	I1208 01:57:17.732377 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.732386 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:17.732393 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:17.732461 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:17.761008 1136586 cri.go:89] found id: ""
	I1208 01:57:17.761033 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.761042 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:17.761053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:17.761112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:17.789502 1136586 cri.go:89] found id: ""
	I1208 01:57:17.789527 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.789536 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:17.789543 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:17.789599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:17.814915 1136586 cri.go:89] found id: ""
	I1208 01:57:17.814938 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.814947 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:17.814958 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:17.814971 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:17.901464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:17.901483 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:17.901496 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:17.927699 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:17.927737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:17.956480 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:17.956506 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:18.016061 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:18.016103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
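	Each gathering cycle above opens the same way: for every control-plane component minikube expects (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it runs `sudo crictl ps -a --quiet --name=<component>` and treats an empty ID list as "No container was found". A minimal local sketch of that probe loop follows; the function name probeContainers is illustrative, and minikube's real code (cri.go) issues these commands over SSH via ssh_runner rather than shelling out locally.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainers runs the same crictl query the log shows for each
// expected component and records whether any container ID came back.
func probeContainers(components []string) map[string]bool {
	found := make(map[string]bool)
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		found[name] = err == nil && len(ids) > 0
	}
	return found
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"}
	for name, ok := range probeContainers(components) {
		if !ok {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}
```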
	I1208 01:57:20.532462 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.543127 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:20.543203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:20.568124 1136586 cri.go:89] found id: ""
	I1208 01:57:20.568149 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.568158 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:20.568167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:20.568227 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:20.603985 1136586 cri.go:89] found id: ""
	I1208 01:57:20.604021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.604030 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:20.604037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:20.604106 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:20.636556 1136586 cri.go:89] found id: ""
	I1208 01:57:20.636588 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.636597 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:20.636603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:20.636671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:20.672751 1136586 cri.go:89] found id: ""
	I1208 01:57:20.672825 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.672860 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:20.672885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:20.672980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:20.701486 1136586 cri.go:89] found id: ""
	I1208 01:57:20.701557 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.701593 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:20.701617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:20.701708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:20.727838 1136586 cri.go:89] found id: ""
	I1208 01:57:20.727863 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.727873 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:20.727897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:20.727958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:20.757101 1136586 cri.go:89] found id: ""
	I1208 01:57:20.757126 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.757135 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:20.757142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:20.757204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:20.786936 1136586 cri.go:89] found id: ""
	I1208 01:57:20.786961 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.786970 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:20.786981 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:20.786995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:20.801478 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:20.801508 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:20.873983 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:20.874054 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:20.874087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:20.901450 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:20.901529 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:20.934263 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:20.934288 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
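	Every `kubectl describe nodes` attempt in these cycles fails identically: nothing is listening on localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container at all. A quick dial check reproduces the symptom; this is purely illustrative and is not something the test harness itself runs.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If kube-apiserver were up inside the node, this dial would succeed;
	// the log's "connect: connection refused" means nothing owns the port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
```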
	I1208 01:57:23.489851 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:23.500424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:23.500500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:23.526190 1136586 cri.go:89] found id: ""
	I1208 01:57:23.526216 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.526225 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:23.526232 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:23.526294 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:23.552764 1136586 cri.go:89] found id: ""
	I1208 01:57:23.552790 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.552799 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:23.552806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:23.552868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:23.577380 1136586 cri.go:89] found id: ""
	I1208 01:57:23.577406 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.577414 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:23.577421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:23.577481 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:23.608802 1136586 cri.go:89] found id: ""
	I1208 01:57:23.608830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.608839 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:23.608846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:23.608910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:23.634994 1136586 cri.go:89] found id: ""
	I1208 01:57:23.635020 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.635029 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:23.635035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:23.635096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:23.663236 1136586 cri.go:89] found id: ""
	I1208 01:57:23.663261 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.663270 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:23.663277 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:23.663350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:23.688872 1136586 cri.go:89] found id: ""
	I1208 01:57:23.688898 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.688907 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:23.688914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:23.688973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:23.714286 1136586 cri.go:89] found id: ""
	I1208 01:57:23.714312 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.714320 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:23.714329 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:23.714345 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:23.742945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:23.742972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.798260 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:23.798300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:23.813312 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:23.813340 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:23.892723 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:23.892748 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:23.892762 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
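	The timestamps expose the outer loop: each cycle begins with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, while that matches nothing, the same log gathering repeats roughly every three seconds. A hedged sketch of such a wait loop is below; waitForAPIServer, the interval, and the one-minute deadline are assumptions for illustration, not minikube's actual names or timeouts.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears
// or the deadline passes. pgrep exits 0 only when something matches.
func waitForAPIServer(deadline time.Duration) bool {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return true
		}
		time.Sleep(3 * time.Second)
	}
	return false
}

func main() {
	if !waitForAPIServer(1 * time.Minute) {
		fmt.Println("kube-apiserver never appeared before the deadline")
	}
}
```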
	I1208 01:57:26.422664 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.433380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:26.433455 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:26.465015 1136586 cri.go:89] found id: ""
	I1208 01:57:26.465039 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.465048 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:26.465055 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:26.465113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:26.493403 1136586 cri.go:89] found id: ""
	I1208 01:57:26.493429 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.493438 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:26.493449 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:26.493537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:26.519773 1136586 cri.go:89] found id: ""
	I1208 01:57:26.519799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.519814 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:26.519821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:26.519883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:26.548992 1136586 cri.go:89] found id: ""
	I1208 01:57:26.549025 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.549037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:26.549047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:26.549127 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:26.574005 1136586 cri.go:89] found id: ""
	I1208 01:57:26.574031 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.574041 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:26.574047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:26.574111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:26.609416 1136586 cri.go:89] found id: ""
	I1208 01:57:26.609443 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.609452 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:26.609459 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:26.609517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:26.640996 1136586 cri.go:89] found id: ""
	I1208 01:57:26.641021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.641031 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:26.641037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:26.641096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:26.667832 1136586 cri.go:89] found id: ""
	I1208 01:57:26.667861 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.667870 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:26.667880 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:26.667911 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:26.727920 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:26.727958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:26.743134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:26.743167 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:26.805654 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:26.805676 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:26.805689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:26.833117 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:26.833153 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
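	The "container status" step avoids assuming a particular runtime: it prefers crictl when `which` can find it and otherwise falls back to `docker ps -a`, exactly as the Run: lines show. Reproduced here verbatim through bash -c; the Go wrapper around it is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim fallback command from the log: use crictl if installed,
	// otherwise let the `|| sudo docker ps -a` branch handle Docker hosts.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}
```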
	I1208 01:57:29.374479 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.385263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:29.385343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:29.411850 1136586 cri.go:89] found id: ""
	I1208 01:57:29.411881 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.411890 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:29.411897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:29.411957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:29.436577 1136586 cri.go:89] found id: ""
	I1208 01:57:29.436650 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.436667 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:29.436674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:29.436741 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:29.461265 1136586 cri.go:89] found id: ""
	I1208 01:57:29.461287 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.461296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:29.461302 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:29.461375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:29.485998 1136586 cri.go:89] found id: ""
	I1208 01:57:29.486024 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.486033 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:29.486039 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:29.486102 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:29.515456 1136586 cri.go:89] found id: ""
	I1208 01:57:29.515482 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.515491 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:29.515498 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:29.515574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:29.540631 1136586 cri.go:89] found id: ""
	I1208 01:57:29.540658 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.540667 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:29.540674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:29.540771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:29.569112 1136586 cri.go:89] found id: ""
	I1208 01:57:29.569156 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.569182 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:29.569194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:29.569276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:29.601158 1136586 cri.go:89] found id: ""
	I1208 01:57:29.601182 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.601192 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:29.601201 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:29.601213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:29.681907 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:29.681933 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:29.681946 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:29.707746 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:29.707781 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.740008 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:29.740036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:29.795859 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:29.795893 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.311192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.322374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:32.322487 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:32.352628 1136586 cri.go:89] found id: ""
	I1208 01:57:32.352653 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.352662 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:32.352668 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:32.352727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:32.379283 1136586 cri.go:89] found id: ""
	I1208 01:57:32.379308 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.379317 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:32.379323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:32.379383 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:32.405884 1136586 cri.go:89] found id: ""
	I1208 01:57:32.405911 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.405919 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:32.405926 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:32.405985 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:32.431914 1136586 cri.go:89] found id: ""
	I1208 01:57:32.431939 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.431948 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:32.431958 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:32.432019 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:32.456763 1136586 cri.go:89] found id: ""
	I1208 01:57:32.456791 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.456799 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:32.456806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:32.456868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:32.482420 1136586 cri.go:89] found id: ""
	I1208 01:57:32.482467 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.482476 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:32.482483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:32.482550 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:32.507167 1136586 cri.go:89] found id: ""
	I1208 01:57:32.507201 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.507210 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:32.507218 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:32.507281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:32.532583 1136586 cri.go:89] found id: ""
	I1208 01:57:32.532612 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.532621 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:32.532630 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:32.532642 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:32.562135 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:32.562163 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:32.619510 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:32.619544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.636767 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:32.636845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:32.721264 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:32.721287 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:32.721300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:35.247026 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.260135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:35.260203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:35.288106 1136586 cri.go:89] found id: ""
	I1208 01:57:35.288130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.288138 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:35.288146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:35.288206 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:35.314646 1136586 cri.go:89] found id: ""
	I1208 01:57:35.314672 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.314682 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:35.314689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:35.314777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:35.342658 1136586 cri.go:89] found id: ""
	I1208 01:57:35.342685 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.342693 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:35.342700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:35.342762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:35.367839 1136586 cri.go:89] found id: ""
	I1208 01:57:35.367862 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.367870 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:35.367877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:35.367937 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:35.392345 1136586 cri.go:89] found id: ""
	I1208 01:57:35.392419 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.392449 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:35.392461 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:35.392525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:35.417214 1136586 cri.go:89] found id: ""
	I1208 01:57:35.417241 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.417250 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:35.417257 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:35.417318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:35.444512 1136586 cri.go:89] found id: ""
	I1208 01:57:35.444538 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.444546 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:35.444556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:35.444614 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:35.470153 1136586 cri.go:89] found id: ""
	I1208 01:57:35.470227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.470250 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:35.470272 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:35.470310 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:35.497905 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:35.497934 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:35.553331 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:35.553369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:35.568215 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:35.568246 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:35.665180 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:35.665205 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:35.665219 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.193386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.204636 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:38.204720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:38.230690 1136586 cri.go:89] found id: ""
	I1208 01:57:38.230717 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.230726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:38.230732 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:38.230791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:38.255363 1136586 cri.go:89] found id: ""
	I1208 01:57:38.255385 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.255394 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:38.255401 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:38.255460 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:38.282875 1136586 cri.go:89] found id: ""
	I1208 01:57:38.282899 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.282907 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:38.282914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:38.282980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:38.308397 1136586 cri.go:89] found id: ""
	I1208 01:57:38.308422 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.308437 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:38.308443 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:38.308505 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:38.334844 1136586 cri.go:89] found id: ""
	I1208 01:57:38.334871 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.334880 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:38.334886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:38.334945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:38.360635 1136586 cri.go:89] found id: ""
	I1208 01:57:38.360659 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.360669 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:38.360676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:38.360737 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:38.385673 1136586 cri.go:89] found id: ""
	I1208 01:57:38.385702 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.385710 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:38.385717 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:38.385776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:38.410525 1136586 cri.go:89] found id: ""
	I1208 01:57:38.410560 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.410569 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:38.410578 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:38.410589 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:38.467839 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:38.467874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:38.482720 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:38.482748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:38.547244 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:38.547268 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:38.547282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.573312 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:38.573350 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:41.116290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:41.132190 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:41.132273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:41.164024 1136586 cri.go:89] found id: ""
	I1208 01:57:41.164049 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.164058 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:41.164064 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:41.164126 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:41.190343 1136586 cri.go:89] found id: ""
	I1208 01:57:41.190380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.190390 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:41.190396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:41.190480 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:41.215567 1136586 cri.go:89] found id: ""
	I1208 01:57:41.215591 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.215600 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:41.215607 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:41.215712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:41.241307 1136586 cri.go:89] found id: ""
	I1208 01:57:41.241380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.241404 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:41.241424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:41.241510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:41.266598 1136586 cri.go:89] found id: ""
	I1208 01:57:41.266666 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.266682 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:41.266689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:41.266748 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:41.292745 1136586 cri.go:89] found id: ""
	I1208 01:57:41.292806 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.292833 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:41.292851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:41.292947 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:41.322477 1136586 cri.go:89] found id: ""
	I1208 01:57:41.322503 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.322528 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:41.322534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:41.322598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:41.348001 1136586 cri.go:89] found id: ""
	I1208 01:57:41.348028 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.348037 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:41.348047 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:41.348059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:41.413651 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:41.413677 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:41.413690 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:41.443591 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:41.443637 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:41.475807 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:41.475839 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:41.531946 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:41.531985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:44.047381 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.058560 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:44.058632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:44.086946 1136586 cri.go:89] found id: ""
	I1208 01:57:44.086974 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.086983 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:44.086990 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:44.087055 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:44.119808 1136586 cri.go:89] found id: ""
	I1208 01:57:44.119837 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.119846 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:44.119853 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:44.119914 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:44.151166 1136586 cri.go:89] found id: ""
	I1208 01:57:44.151189 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.151197 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:44.151204 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:44.151266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:44.179208 1136586 cri.go:89] found id: ""
	I1208 01:57:44.179232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.179240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:44.179247 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:44.179307 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:44.204931 1136586 cri.go:89] found id: ""
	I1208 01:57:44.204957 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.204967 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:44.204973 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:44.205086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:44.233222 1136586 cri.go:89] found id: ""
	I1208 01:57:44.233263 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.233289 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:44.233303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:44.233381 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:44.258112 1136586 cri.go:89] found id: ""
	I1208 01:57:44.258180 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.258204 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:44.258225 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:44.258301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:44.282317 1136586 cri.go:89] found id: ""
	I1208 01:57:44.282339 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.282348 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:44.282358 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:44.282369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:44.337431 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:44.337465 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:44.352560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:44.352633 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:44.416710 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:44.416734 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:44.416745 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:44.443231 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:44.443264 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:46.971715 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.982590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:46.982716 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:47.013622 1136586 cri.go:89] found id: ""
	I1208 01:57:47.013655 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.013665 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:47.013689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:47.013773 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:47.039262 1136586 cri.go:89] found id: ""
	I1208 01:57:47.039288 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.039298 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:47.039305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:47.039369 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:47.064571 1136586 cri.go:89] found id: ""
	I1208 01:57:47.064597 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.064606 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:47.064612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:47.064671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:47.103360 1136586 cri.go:89] found id: ""
	I1208 01:57:47.103428 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.103452 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:47.103471 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:47.103558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:47.137446 1136586 cri.go:89] found id: ""
	I1208 01:57:47.137514 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.137537 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:47.137556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:47.137643 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:47.167484 1136586 cri.go:89] found id: ""
	I1208 01:57:47.167507 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.167515 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:47.167522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:47.167581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:47.198040 1136586 cri.go:89] found id: ""
	I1208 01:57:47.198072 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.198082 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:47.198088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:47.198155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:47.222585 1136586 cri.go:89] found id: ""
	I1208 01:57:47.222609 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.222618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:47.222635 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:47.222648 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:47.253438 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:47.253468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:47.312655 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:47.312692 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:47.328066 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:47.328146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:47.396328 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:47.396351 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:47.396365 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:49.922587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.933241 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.933357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.957944 1136586 cri.go:89] found id: ""
	I1208 01:57:49.957967 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.957976 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.957983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.958043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.983531 1136586 cri.go:89] found id: ""
	I1208 01:57:49.983556 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.983565 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.983573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.983634 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:50.014921 1136586 cri.go:89] found id: ""
	I1208 01:57:50.014948 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.014958 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:50.014965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:50.015054 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:50.051300 1136586 cri.go:89] found id: ""
	I1208 01:57:50.051356 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.051365 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:50.051373 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:50.051439 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:50.078205 1136586 cri.go:89] found id: ""
	I1208 01:57:50.078232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.078242 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:50.078248 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:50.078313 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:50.116415 1136586 cri.go:89] found id: ""
	I1208 01:57:50.116472 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.116482 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:50.116489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:50.116549 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:50.152924 1136586 cri.go:89] found id: ""
	I1208 01:57:50.152953 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.152962 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:50.152971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:50.153034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:50.183266 1136586 cri.go:89] found id: ""
	I1208 01:57:50.183303 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.183313 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:50.183323 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:50.183339 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:50.219490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:50.219518 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:50.278125 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:50.278160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:50.293360 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:50.293392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:50.361099 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:50.361124 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:50.361137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:52.887762 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:52.898605 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:52.898684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:52.924862 1136586 cri.go:89] found id: ""
	I1208 01:57:52.924888 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.924898 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:52.924904 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:52.924967 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.953738 1136586 cri.go:89] found id: ""
	I1208 01:57:52.953766 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.953775 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.953781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.953841 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.979112 1136586 cri.go:89] found id: ""
	I1208 01:57:52.979135 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.979143 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.979156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.979220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:53.010105 1136586 cri.go:89] found id: ""
	I1208 01:57:53.010136 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.010146 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:53.010153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:53.010224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:53.040709 1136586 cri.go:89] found id: ""
	I1208 01:57:53.040737 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.040746 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:53.040759 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:53.040820 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:53.066591 1136586 cri.go:89] found id: ""
	I1208 01:57:53.066615 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.066624 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:53.066631 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:53.066690 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:53.103691 1136586 cri.go:89] found id: ""
	I1208 01:57:53.103721 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.103730 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:53.103737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:53.103796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:53.135825 1136586 cri.go:89] found id: ""
	I1208 01:57:53.135860 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.135869 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:53.135879 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:53.135892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:53.154871 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:53.154897 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:53.223770 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:53.223803 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:53.223818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:53.248879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:53.248912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:53.278989 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:53.279015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:55.836344 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:55.851014 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:55.851088 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:55.880945 1136586 cri.go:89] found id: ""
	I1208 01:57:55.880968 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.880977 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:55.880983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:55.881047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.918324 1136586 cri.go:89] found id: ""
	I1208 01:57:55.918348 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.918357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.918363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.918420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.943772 1136586 cri.go:89] found id: ""
	I1208 01:57:55.943799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.943808 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.943814 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.943872 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.968672 1136586 cri.go:89] found id: ""
	I1208 01:57:55.968695 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.968705 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.968711 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.968772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.993546 1136586 cri.go:89] found id: ""
	I1208 01:57:55.993573 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.993582 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.993588 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.993648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:56.026891 1136586 cri.go:89] found id: ""
	I1208 01:57:56.026916 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.026924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:56.026931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:56.026998 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:56.053302 1136586 cri.go:89] found id: ""
	I1208 01:57:56.053334 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.053344 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:56.053356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:56.053468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:56.079706 1136586 cri.go:89] found id: ""
	I1208 01:57:56.079733 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.079741 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:56.079750 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:56.079761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:56.142320 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:56.142357 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:56.157995 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:56.158067 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:56.221039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:56.221063 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:56.221077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:56.247019 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:56.247058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.775233 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:58.785596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:58.785682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:58.809955 1136586 cri.go:89] found id: ""
	I1208 01:57:58.809986 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.809996 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:58.810002 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:58.810061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:58.835423 1136586 cri.go:89] found id: ""
	I1208 01:57:58.835447 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.835456 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:58.835462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:58.835524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.867905 1136586 cri.go:89] found id: ""
	I1208 01:57:58.867928 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.867937 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.867943 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.868003 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.896767 1136586 cri.go:89] found id: ""
	I1208 01:57:58.896794 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.896803 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.896810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.896868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.926611 1136586 cri.go:89] found id: ""
	I1208 01:57:58.926633 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.926642 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.926648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.926707 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.954977 1136586 cri.go:89] found id: ""
	I1208 01:57:58.955001 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.955010 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.955016 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.955075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.984186 1136586 cri.go:89] found id: ""
	I1208 01:57:58.984209 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.984218 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.984224 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.984286 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:59.011291 1136586 cri.go:89] found id: ""
	I1208 01:57:59.011314 1136586 logs.go:282] 0 containers: []
	W1208 01:57:59.011323 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:59.011333 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:59.011346 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:59.067486 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:59.067520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:59.082307 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:59.082334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:59.162802 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:59.162826 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:59.162838 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:59.187405 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:59.187437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:01.720540 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:01.731197 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:01.731266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:01.756392 1136586 cri.go:89] found id: ""
	I1208 01:58:01.756414 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.756431 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:01.756438 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:01.756504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:01.782980 1136586 cri.go:89] found id: ""
	I1208 01:58:01.783050 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.783074 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:01.783099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:01.783180 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:01.808911 1136586 cri.go:89] found id: ""
	I1208 01:58:01.808947 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.808957 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:01.808964 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:01.809032 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.833417 1136586 cri.go:89] found id: ""
	I1208 01:58:01.833490 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.833514 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.833534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.833644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.863178 1136586 cri.go:89] found id: ""
	I1208 01:58:01.863255 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.863277 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.863296 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.863391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.893466 1136586 cri.go:89] found id: ""
	I1208 01:58:01.893540 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.893562 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.893582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.893669 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.927969 1136586 cri.go:89] found id: ""
	I1208 01:58:01.928046 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.928060 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.928067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.928137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.954102 1136586 cri.go:89] found id: ""
	I1208 01:58:01.954130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.954141 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.954150 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.954162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:02.011065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:02.011103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:02.028187 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:02.028220 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:02.092492 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
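Every retry in this log fails the same way: kubectl cannot reach the apiserver at localhost:8443 because nothing is listening there (localhost resolves to [::1] first, hence the "dial tcp [::1]:8443" errors). As a sketch, a plain TCP dial reproduces the same condition without kubectl; the host and port come from the log above, everything else here is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" in the kubectl stderr above just means there is no
	// listener on the apiserver port; a raw TCP dial shows the same thing.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}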
	I1208 01:58:02.092518 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:02.092532 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:02.123344 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:02.123377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
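The "container status" gatherer above shells out to `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`, i.e. prefer crictl when it is available and fall back to docker. A simplified local sketch of that fallback chain (the real runner executes the command over SSH inside the node; the function name here is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback above: try crictl when it is on PATH,
// otherwise fall back to `docker ps -a`.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("could not list containers:", err)
		return
	}
	fmt.Print(out)
}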
	I1208 01:58:04.657423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:04.669705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:04.669794 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:04.696818 1136586 cri.go:89] found id: ""
	I1208 01:58:04.696848 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.696857 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:04.696864 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:04.696973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:04.723929 1136586 cri.go:89] found id: ""
	I1208 01:58:04.723951 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.723960 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:04.723967 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:04.724028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:04.749688 1136586 cri.go:89] found id: ""
	I1208 01:58:04.749712 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.749721 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:04.749727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:04.749790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:04.780181 1136586 cri.go:89] found id: ""
	I1208 01:58:04.780212 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.780223 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:04.780230 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:04.780310 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:04.805904 1136586 cri.go:89] found id: ""
	I1208 01:58:04.805930 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.805941 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:04.805947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:04.806004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:04.830657 1136586 cri.go:89] found id: ""
	I1208 01:58:04.830682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.830692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:04.830699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:04.830765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:04.870065 1136586 cri.go:89] found id: ""
	I1208 01:58:04.870130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.870152 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:04.870170 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:04.870263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:04.898118 1136586 cri.go:89] found id: ""
	I1208 01:58:04.898185 1136586 logs.go:282] 0 containers: []
	W1208 01:58:04.898207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:04.898228 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:04.898266 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:04.931407 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:04.931433 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:04.987787 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:04.987825 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:05.003245 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:05.003331 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:05.079158 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:05.070381    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.071114    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.072989    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.073584    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:05.075042    9876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:05.079184 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:05.079196 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
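Each cycle checks every control-plane component the same way: `sudo crictl ps -a --quiet --name=<component>` prints one container ID per line, so empty output is what produces the `found id: ""` / `0 containers` / `No container was found matching` lines above. A sketch of that check, simplified from the flow the cri.go lines describe (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs the same query as the log above and splits the
// output into IDs; no output means the component never started.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found %d %s container(s): %v\n", len(ids), name, ids)
	}
}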
	[... six more identical poll cycles elided (01:58:07, 01:58:10, 01:58:13, 01:58:16, 01:58:19, 01:58:22): each pgrep for kube-apiserver and every crictl query again returns no control-plane containers, and each "describe nodes" attempt fails with the same connection-refused errors on localhost:8443 ...]
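The elided cycles show the overall shape of the wait: probe for a kube-apiserver process with pgrep, and on failure re-gather logs and retry roughly every 3 seconds until a deadline. A minimal sketch of that poll-until-deadline pattern, assuming the same pgrep probe as the log (the timeout value and helper name are illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe above: pgrep exits non-zero when no
// process matches, which Run() surfaces as an error.
func apiserverRunning(ctx context.Context) bool {
	return exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		if apiserverRunning(ctx) {
			fmt.Println("kube-apiserver is up")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-apiserver")
			return
		case <-time.After(3 * time.Second): // matches the ~3s cadence in the log
		}
	}
}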
	I1208 01:58:25.216756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:25.228093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:25.228171 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:25.254793 1136586 cri.go:89] found id: ""
	I1208 01:58:25.254820 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.254840 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:25.254848 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:25.254911 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:25.280729 1136586 cri.go:89] found id: ""
	I1208 01:58:25.280756 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.280765 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:25.280772 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:25.280856 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:25.306714 1136586 cri.go:89] found id: ""
	I1208 01:58:25.306786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.306802 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:25.306809 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:25.306883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:25.333920 1136586 cri.go:89] found id: ""
	I1208 01:58:25.333955 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.333964 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:25.333971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:25.334044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:25.361369 1136586 cri.go:89] found id: ""
	I1208 01:58:25.361396 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.361405 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:25.361412 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:25.361486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:25.392931 1136586 cri.go:89] found id: ""
	I1208 01:58:25.392958 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.392967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:25.392974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:25.393046 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:25.423143 1136586 cri.go:89] found id: ""
	I1208 01:58:25.423168 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.423177 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:25.423183 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:25.423245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:25.452795 1136586 cri.go:89] found id: ""
	I1208 01:58:25.452872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.452888 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:25.452899 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:25.452913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:25.479544 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.479585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:25.510747 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:25.510777 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.566401 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:25.566437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:25.581786 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:25.581816 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:25.653146 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:25.644228   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.645011   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.646682   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.647230   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.648941   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
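Each iteration then queries the CRI runtime once per control-plane component, and every query here returns an empty ID list (found id: "", "0 containers"). A sketch of that listing pass, with the component names and crictl flags taken verbatim from the log (the loop itself is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names exactly as queried in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		// -a includes exited containers; --quiet prints only IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}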
	I1208 01:58:28.153984 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:28.164723 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:28.164793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:28.188760 1136586 cri.go:89] found id: ""
	I1208 01:58:28.188786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.188796 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:28.188803 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:28.188865 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:28.213011 1136586 cri.go:89] found id: ""
	I1208 01:58:28.213037 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.213046 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:28.213053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:28.213114 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:28.237473 1136586 cri.go:89] found id: ""
	I1208 01:58:28.237547 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.237559 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:28.237566 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:28.237692 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:28.264353 1136586 cri.go:89] found id: ""
	I1208 01:58:28.264378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.264387 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:28.264394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:28.264478 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:28.289216 1136586 cri.go:89] found id: ""
	I1208 01:58:28.289250 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.289259 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:28.289265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:28.289332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:28.314397 1136586 cri.go:89] found id: ""
	I1208 01:58:28.314431 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.314440 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:28.314480 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:28.314553 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:28.339256 1136586 cri.go:89] found id: ""
	I1208 01:58:28.339290 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.339299 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:28.339305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:28.339372 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:28.376790 1136586 cri.go:89] found id: ""
	I1208 01:58:28.376824 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.376833 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:28.376842 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:28.376854 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:28.412562 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:28.412597 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:28.468784 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:28.468818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:28.483513 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:28.483539 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:28.548999 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:28.540733   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.541130   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.542744   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.543481   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.545172   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:28.549069 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:28.549088 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
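When no containers turn up, the loop gathers diagnostics instead: the last 400 journal lines for the containerd and kubelet units, filtered kernel messages, and a kubectl describe of the nodes. A sketch that runs the same shell pipelines the log records (the wrapper is illustrative only; minikube runs these over ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the shell pipelines quoted in the log and prints
// its combined stdout/stderr.
func gather(name, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
}

func main() {
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
}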
	I1208 01:58:31.074358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:31.085483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:31.085557 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:31.113378 1136586 cri.go:89] found id: ""
	I1208 01:58:31.113404 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.113413 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:31.113419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:31.113486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:31.151500 1136586 cri.go:89] found id: ""
	I1208 01:58:31.151527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.151537 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:31.151544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:31.151606 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:31.198664 1136586 cri.go:89] found id: ""
	I1208 01:58:31.198692 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.198701 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:31.198708 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:31.198770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:31.225073 1136586 cri.go:89] found id: ""
	I1208 01:58:31.225100 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.225109 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:31.225115 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:31.225178 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:31.253221 1136586 cri.go:89] found id: ""
	I1208 01:58:31.253248 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.253256 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:31.253263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:31.253328 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:31.278685 1136586 cri.go:89] found id: ""
	I1208 01:58:31.278715 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.278724 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:31.278731 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:31.278793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:31.308014 1136586 cri.go:89] found id: ""
	I1208 01:58:31.308040 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.308050 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:31.308057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:31.308118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:31.333618 1136586 cri.go:89] found id: ""
	I1208 01:58:31.333646 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.333655 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:31.333666 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:31.333677 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.360688 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:31.360767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:31.400673 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:31.400748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:31.458405 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:31.458467 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:31.473371 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:31.473403 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:31.535352 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:31.527438   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.527848   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529393   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529711   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.531184   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
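Every kubectl attempt fails the same way: "dial tcp [::1]:8443: connect: connection refused" is a TCP-level error, meaning no process is accepting connections on the apiserver port before any HTTP request is even made. A minimal probe of that condition (assumption: run on the node itself, where localhost:8443 is the apiserver address from the kubeconfig):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same TCP-level failure the kubectl errors report: nothing is
	// listening on the apiserver port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}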
	I1208 01:58:34.035643 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:34.047071 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:34.047236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:34.072671 1136586 cri.go:89] found id: ""
	I1208 01:58:34.072696 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.072705 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:34.072712 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:34.072776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:34.102807 1136586 cri.go:89] found id: ""
	I1208 01:58:34.102835 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.102844 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:34.102851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:34.102910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:34.129970 1136586 cri.go:89] found id: ""
	I1208 01:58:34.129998 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.130007 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:34.130017 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:34.130077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:34.156982 1136586 cri.go:89] found id: ""
	I1208 01:58:34.157009 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.157019 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:34.157026 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:34.157086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:34.181976 1136586 cri.go:89] found id: ""
	I1208 01:58:34.182003 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.182013 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:34.182020 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:34.182081 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:34.206537 1136586 cri.go:89] found id: ""
	I1208 01:58:34.206615 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.206630 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:34.206638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:34.206699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:34.236167 1136586 cri.go:89] found id: ""
	I1208 01:58:34.236192 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.236201 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:34.236210 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:34.236270 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:34.262308 1136586 cri.go:89] found id: ""
	I1208 01:58:34.262332 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.262341 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:34.262351 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:34.262363 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:34.317558 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:34.317593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:34.332448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:34.332475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:34.412027 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:34.403876   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.404660   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406277   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406619   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.408039   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:34.412050 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:34.412062 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:34.438062 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:34.438097 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
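The timestamps show the whole cycle repeating at roughly three-second intervals, consistent with a fixed-interval poll under an overall deadline. A sketch of such a loop (the interval matches the spacing above; the timeout value is an assumed placeholder, and the helper is not minikube's real wait code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for the apiserver process until the deadline,
// sleeping 3s between attempts like the iterations in this log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}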
	I1208 01:58:36.967795 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.978660 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.978730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:37.012757 1136586 cri.go:89] found id: ""
	I1208 01:58:37.012787 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.012797 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:37.012804 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:37.012878 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:37.041663 1136586 cri.go:89] found id: ""
	I1208 01:58:37.041685 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.041693 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:37.041700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:37.041758 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:37.066610 1136586 cri.go:89] found id: ""
	I1208 01:58:37.066694 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.066716 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:37.066734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:37.066844 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:37.094085 1136586 cri.go:89] found id: ""
	I1208 01:58:37.094162 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.094187 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:37.094209 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:37.094319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:37.132780 1136586 cri.go:89] found id: ""
	I1208 01:58:37.132864 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.132886 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:37.132905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:37.133017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:37.169263 1136586 cri.go:89] found id: ""
	I1208 01:58:37.169340 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.169365 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:37.169386 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:37.169498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:37.194196 1136586 cri.go:89] found id: ""
	I1208 01:58:37.194275 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.194300 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:37.194319 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:37.194404 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:37.219299 1136586 cri.go:89] found id: ""
	I1208 01:58:37.219378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.219415 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:37.219442 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:37.219469 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:37.274745 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:37.274782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:37.289751 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:37.289779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:37.363255 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:37.352560   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.353342   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355038   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355657   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.357229   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:37.363297 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:37.363316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:37.401496 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:37.401554 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
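The "container status" step uses a shell fallback: `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a` runs crictl when it is installed and falls back to docker otherwise. The same fallback expressed directly in Go (a hypothetical wrapper, shown only to make the `||` chain explicit):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; if it is missing or fails, fall back to docker,
	// mirroring the shell `||` chain in the log.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	fmt.Printf("err=%v\n%s", err, out)
}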
	I1208 01:58:39.942202 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:39.953239 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.953312 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.978920 1136586 cri.go:89] found id: ""
	I1208 01:58:39.978943 1136586 logs.go:282] 0 containers: []
	W1208 01:58:39.978952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.978959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.979017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:40.025284 1136586 cri.go:89] found id: ""
	I1208 01:58:40.025316 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.025343 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:40.025352 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:40.025427 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:40.067843 1136586 cri.go:89] found id: ""
	I1208 01:58:40.067869 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.067879 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:40.067886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:40.067952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:40.102669 1136586 cri.go:89] found id: ""
	I1208 01:58:40.102759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.102785 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:40.102806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:40.102923 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:40.150768 1136586 cri.go:89] found id: ""
	I1208 01:58:40.150799 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.150809 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:40.150815 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:40.150881 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:40.179334 1136586 cri.go:89] found id: ""
	I1208 01:58:40.179362 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.179373 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:40.179382 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:40.179453 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:40.208035 1136586 cri.go:89] found id: ""
	I1208 01:58:40.208063 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.208072 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:40.208079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:40.208144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:40.238244 1136586 cri.go:89] found id: ""
	I1208 01:58:40.238286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.238296 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:40.238306 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:40.238320 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:40.264240 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:40.264279 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:40.295875 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:40.295900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:40.355993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:40.356087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:40.374494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:40.374575 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:40.448504 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:42.948778 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.959677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.959745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.984449 1136586 cri.go:89] found id: ""
	I1208 01:58:42.984474 1136586 logs.go:282] 0 containers: []
	W1208 01:58:42.984483 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.984489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.984555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:43.015138 1136586 cri.go:89] found id: ""
	I1208 01:58:43.015163 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.015172 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:43.015178 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:43.015242 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:43.040581 1136586 cri.go:89] found id: ""
	I1208 01:58:43.040608 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.040617 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:43.040623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:43.040685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:43.066316 1136586 cri.go:89] found id: ""
	I1208 01:58:43.066345 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.066367 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:43.066374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:43.066484 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:43.095034 1136586 cri.go:89] found id: ""
	I1208 01:58:43.095062 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.095071 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:43.095077 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:43.095137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:43.129297 1136586 cri.go:89] found id: ""
	I1208 01:58:43.129323 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.129333 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:43.129340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:43.129413 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:43.160843 1136586 cri.go:89] found id: ""
	I1208 01:58:43.160912 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.160929 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:43.160937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:43.161012 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:43.189017 1136586 cri.go:89] found id: ""
	I1208 01:58:43.189043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.189051 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:43.189060 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:43.189071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:43.245153 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:43.245189 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:43.260337 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:43.260380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:43.329966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:43.329985 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:43.329998 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:43.357975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:43.358058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.892416 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.902821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.902893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.929257 1136586 cri.go:89] found id: ""
	I1208 01:58:45.929283 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.929292 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.929299 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.929357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.954817 1136586 cri.go:89] found id: ""
	I1208 01:58:45.954851 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.954861 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.954867 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.954928 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.980153 1136586 cri.go:89] found id: ""
	I1208 01:58:45.980183 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.980196 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.980202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.980263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:46.009369 1136586 cri.go:89] found id: ""
	I1208 01:58:46.009398 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.009408 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:46.009415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:46.009555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:46.035686 1136586 cri.go:89] found id: ""
	I1208 01:58:46.035713 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.035736 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:46.035743 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:46.035815 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:46.065295 1136586 cri.go:89] found id: ""
	I1208 01:58:46.065327 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.065337 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:46.065344 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:46.065414 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:46.104678 1136586 cri.go:89] found id: ""
	I1208 01:58:46.104746 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.104769 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:46.104790 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:46.104877 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:46.134606 1136586 cri.go:89] found id: ""
	I1208 01:58:46.134682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.134705 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:46.134727 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:46.134766 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:46.198135 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:46.198171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:46.213155 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:46.213180 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:46.287421 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:46.287443 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:46.287456 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:46.313370 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:46.313405 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
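The block above is one complete diagnostic pass: minikube first looks for a running kube-apiserver process with pgrep, then asks crictl for each expected control-plane container in turn (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and finally gathers kubelet, dmesg, describe-nodes, containerd, and container-status output. Every crictl query returns an empty ID list, so no control-plane container was ever created. The per-component probes can be replayed by hand inside the node; the crictl invocation below is copied verbatim from the log, and only the loop wrapper is added for illustration:

    # Replay the per-component probes from the pass above.
    # The crictl command is verbatim from the log; the loop is illustrative.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "--- ${name} ---"
      sudo crictl ps -a --quiet --name="${name}"   # no output = no container
    done

No output for any component, as seen here, points to the control plane never having been created at all, rather than to individual pods crashing after startup.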
	I1208 01:58:48.849489 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.861044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.861117 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.886203 1136586 cri.go:89] found id: ""
	I1208 01:58:48.886227 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.886237 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.886243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.886305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.911152 1136586 cri.go:89] found id: ""
	I1208 01:58:48.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.911187 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.911193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.911275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.935595 1136586 cri.go:89] found id: ""
	I1208 01:58:48.935620 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.935629 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.935635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.935750 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.959533 1136586 cri.go:89] found id: ""
	I1208 01:58:48.959558 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.959566 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.959573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.959631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.985031 1136586 cri.go:89] found id: ""
	I1208 01:58:48.985057 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.985066 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.985073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.985176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:49.014577 1136586 cri.go:89] found id: ""
	I1208 01:58:49.014603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.014612 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:49.014619 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:49.014679 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:49.038952 1136586 cri.go:89] found id: ""
	I1208 01:58:49.038978 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.038987 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:49.038993 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:49.039051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:49.063733 1136586 cri.go:89] found id: ""
	I1208 01:58:49.063759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.063768 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:49.063777 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:49.063788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:49.097818 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:49.097852 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:49.161476 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:49.161513 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:49.178959 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:49.178995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:49.243404 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:49.243465 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:49.243502 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:51.768803 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.780779 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.780851 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.808733 1136586 cri.go:89] found id: ""
	I1208 01:58:51.808759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.808768 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.808775 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.808846 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.835560 1136586 cri.go:89] found id: ""
	I1208 01:58:51.835587 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.835599 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.835606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.835670 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.860461 1136586 cri.go:89] found id: ""
	I1208 01:58:51.860485 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.860494 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.860501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.860562 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.885253 1136586 cri.go:89] found id: ""
	I1208 01:58:51.885286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.885294 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.885303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.885373 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.909393 1136586 cri.go:89] found id: ""
	I1208 01:58:51.909420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.909429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.909436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.909498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.934211 1136586 cri.go:89] found id: ""
	I1208 01:58:51.934245 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.934254 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.934261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.934331 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.958861 1136586 cri.go:89] found id: ""
	I1208 01:58:51.958887 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.958896 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.958903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.958961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.984069 1136586 cri.go:89] found id: ""
	I1208 01:58:51.984095 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.984106 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.984115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.984146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:51.999081 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.999109 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:52.068304 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:52.068327 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:52.068341 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:52.094374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:52.094481 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:52.127916 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:52.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
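Each pass also shells out to the bundled kubectl to describe the nodes, and that is where the repeated memcache.go errors come from: the client dials https://localhost:8443 and is refused on [::1]:8443, meaning nothing is listening on the apiserver port at all, which is consistent with the empty crictl listings above. Two quick, illustrative checks (not commands the log itself runs) that distinguish a closed port from a hung one:

    # Illustrative only: is anything bound to the apiserver port?
    sudo ss -tln | grep ':8443' || echo "nothing listening on 8443"
    # An immediate refusal (rather than a timeout) also indicates a
    # closed port, matching the kubectl errors in the log:
    curl -sk --max-time 5 https://localhost:8443/healthz || echo "refused or closed"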
	I1208 01:58:54.695208 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.706109 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.706218 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.731787 1136586 cri.go:89] found id: ""
	I1208 01:58:54.731814 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.731823 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.731835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.731895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.760606 1136586 cri.go:89] found id: ""
	I1208 01:58:54.760631 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.760639 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.760646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.760706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.786598 1136586 cri.go:89] found id: ""
	I1208 01:58:54.786626 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.786635 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.786641 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.786699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.816536 1136586 cri.go:89] found id: ""
	I1208 01:58:54.816562 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.816572 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.816579 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.816641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.845022 1136586 cri.go:89] found id: ""
	I1208 01:58:54.845048 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.845056 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.845063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.845125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.870700 1136586 cri.go:89] found id: ""
	I1208 01:58:54.870725 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.870734 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.870741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.870799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.899897 1136586 cri.go:89] found id: ""
	I1208 01:58:54.899923 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.899934 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.899941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.900002 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.928551 1136586 cri.go:89] found id: ""
	I1208 01:58:54.928575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.928584 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.928593 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.928606 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.991743 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:54.991769 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:54.991782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:55.022605 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:55.022696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:55.052018 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:55.052044 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:55.112862 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:55.112979 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.628955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.639865 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.639964 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.667931 1136586 cri.go:89] found id: ""
	I1208 01:58:57.667954 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.667962 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.667969 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.668039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.696303 1136586 cri.go:89] found id: ""
	I1208 01:58:57.696328 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.696337 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.696343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.696402 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.720015 1136586 cri.go:89] found id: ""
	I1208 01:58:57.720043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.720052 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.720059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.720120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.748838 1136586 cri.go:89] found id: ""
	I1208 01:58:57.748910 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.748934 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.748953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.749033 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.776554 1136586 cri.go:89] found id: ""
	I1208 01:58:57.776575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.776584 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.776591 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.776648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.800791 1136586 cri.go:89] found id: ""
	I1208 01:58:57.800815 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.800823 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.800830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.800904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.825904 1136586 cri.go:89] found id: ""
	I1208 01:58:57.825975 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.825998 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.826021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.826157 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.853294 1136586 cri.go:89] found id: ""
	I1208 01:58:57.853318 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.853327 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.853336 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.853348 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.868267 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.868292 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.934230 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:58:57.934259 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:57.934274 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:57.960735 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.960767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:57.989741 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.989770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.546140 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.557379 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.557497 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.583568 1136586 cri.go:89] found id: ""
	I1208 01:59:00.583595 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.583605 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.583611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.583695 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.615812 1136586 cri.go:89] found id: ""
	I1208 01:59:00.615838 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.615847 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.615856 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.615924 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.642865 1136586 cri.go:89] found id: ""
	I1208 01:59:00.642905 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.642914 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.642921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.642991 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.669343 1136586 cri.go:89] found id: ""
	I1208 01:59:00.669418 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.669434 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.669441 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.669501 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.695611 1136586 cri.go:89] found id: ""
	I1208 01:59:00.695688 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.695702 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.695709 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.695774 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.721947 1136586 cri.go:89] found id: ""
	I1208 01:59:00.721974 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.721983 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.721989 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.722059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.747456 1136586 cri.go:89] found id: ""
	I1208 01:59:00.747485 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.747493 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.747500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.747567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.774802 1136586 cri.go:89] found id: ""
	I1208 01:59:00.774868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.774884 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.774894 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.774906 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.832246 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.832282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.847202 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.847231 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.912820 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:00.912843 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:00.912856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:00.938649 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.938689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
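The container-status collector is worth noting for its runtime-agnostic fallback: it resolves crictl via which (falling back to the bare name if which finds nothing), and if the crictl listing fails outright it retries with docker ps -a. The log's own command, restated with comments:

    # The log's fallback chain, reformatted with explanatory comments:
    # 1) use crictl from PATH if present, else just try the bare name;
    # 2) list all containers; if crictl fails, fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a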
	I1208 01:59:03.468247 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.479180 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.479248 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.503843 1136586 cri.go:89] found id: ""
	I1208 01:59:03.503868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.503877 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.503884 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.503946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.533070 1136586 cri.go:89] found id: ""
	I1208 01:59:03.533092 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.533101 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.533107 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.533173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.560639 1136586 cri.go:89] found id: ""
	I1208 01:59:03.560662 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.560670 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.560677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.560738 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.589123 1136586 cri.go:89] found id: ""
	I1208 01:59:03.589150 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.589159 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.589165 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.589225 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.620870 1136586 cri.go:89] found id: ""
	I1208 01:59:03.620893 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.620902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.620908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.620966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.648582 1136586 cri.go:89] found id: ""
	I1208 01:59:03.648607 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.648616 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.648623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.648688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.676092 1136586 cri.go:89] found id: ""
	I1208 01:59:03.676117 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.676125 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.676131 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.676193 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.704985 1136586 cri.go:89] found id: ""
	I1208 01:59:03.705012 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.705021 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.705031 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.705048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.762437 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.762476 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.777354 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.777423 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.852604 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:03.852630 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:03.852644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:03.877929 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.877964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.407680 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.418391 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.418489 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.448290 1136586 cri.go:89] found id: ""
	I1208 01:59:06.448312 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.448321 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.448327 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.448386 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.473926 1136586 cri.go:89] found id: ""
	I1208 01:59:06.473958 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.473967 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.473974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.474037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.499614 1136586 cri.go:89] found id: ""
	I1208 01:59:06.499640 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.499649 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.499656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.499717 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.526871 1136586 cri.go:89] found id: ""
	I1208 01:59:06.526895 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.526904 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.526910 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.526970 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.551675 1136586 cri.go:89] found id: ""
	I1208 01:59:06.551706 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.551716 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.551722 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.551797 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.576680 1136586 cri.go:89] found id: ""
	I1208 01:59:06.576705 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.576714 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.576724 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.576784 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.613884 1136586 cri.go:89] found id: ""
	I1208 01:59:06.613921 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.613930 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.613939 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.614010 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.642583 1136586 cri.go:89] found id: ""
	I1208 01:59:06.642619 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.642629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.642638 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.642650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.709864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.701412   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.701971   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.703666   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.704330   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.706029   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:59:06.709936 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:06.709962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:06.739423 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.739463 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.767654 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.767684 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.826250 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.826285 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
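From 01:58:46 through 01:59:09 the same pass repeats at roughly three-second intervals with identical results; only the order in which the kubelet, dmesg, describe-nodes, containerd, and container-status collectors run varies between passes. The visible pattern is a plain poll-until-deadline loop. A rough bash equivalent of the behaviour shown in the log (minikube implements this in Go; the interval and deadline here are assumptions made only for the sketch):

    # Sketch of the polling pattern visible in the log.
    # The pgrep pattern is verbatim; the 3 s interval and 300 s
    # deadline are assumed for illustration.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "timed out" >&2; break; }
      sleep 3
    done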
	I1208 01:59:09.342623 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.355321 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.355406 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.392040 1136586 cri.go:89] found id: ""
	I1208 01:59:09.392067 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.392080 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.392091 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.392161 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.420346 1136586 cri.go:89] found id: ""
	I1208 01:59:09.420372 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.420381 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.420387 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.420454 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.446119 1136586 cri.go:89] found id: ""
	I1208 01:59:09.446145 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.446154 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.446161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.446224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.470836 1136586 cri.go:89] found id: ""
	I1208 01:59:09.470859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.470867 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.470873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.470930 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.495896 1136586 cri.go:89] found id: ""
	I1208 01:59:09.495964 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.495988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.496000 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.496076 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.521109 1136586 cri.go:89] found id: ""
	I1208 01:59:09.521136 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.521145 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.521151 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.521211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.551629 1136586 cri.go:89] found id: ""
	I1208 01:59:09.551652 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.551668 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.551676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.551740 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.577446 1136586 cri.go:89] found id: ""
	I1208 01:59:09.577472 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.577481 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.577490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.577500 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.641466 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.641501 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.657574 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.657600 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.724794 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.716983   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.717413   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.718926   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.719242   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.720846   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.716983   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.717413   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.718926   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.719242   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.720846   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.724818 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:09.724830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:09.749729 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.749761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:12.285155 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.296049 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.296118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.325857 1136586 cri.go:89] found id: ""
	I1208 01:59:12.325891 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.325900 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.325907 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.325992 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.363392 1136586 cri.go:89] found id: ""
	I1208 01:59:12.363419 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.363428 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.363434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.363499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.392776 1136586 cri.go:89] found id: ""
	I1208 01:59:12.392803 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.392812 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.392817 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.392884 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.418895 1136586 cri.go:89] found id: ""
	I1208 01:59:12.418919 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.418928 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.418935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.418994 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.444923 1136586 cri.go:89] found id: ""
	I1208 01:59:12.444947 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.444960 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.444966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.445087 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.471912 1136586 cri.go:89] found id: ""
	I1208 01:59:12.471982 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.472006 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.472019 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.472093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.496844 1136586 cri.go:89] found id: ""
	I1208 01:59:12.496877 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.496886 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.496892 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.496966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.523601 1136586 cri.go:89] found id: ""
	I1208 01:59:12.523626 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.523635 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.523645 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.523656 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.581608 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.581646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.598560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.598638 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.666409 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.657320   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.658356   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.659120   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660581   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660908   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.657320   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.658356   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.659120   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660581   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660908   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.666430 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:12.666474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:12.692286 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.692321 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
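Within each cycle, the harness polls the CRI for one control-plane component at a time with `crictl ps -a --quiet --name=<component>`; empty output is what produces the repeated `No container was found matching` warnings. A hedged sketch of that poll as a standalone program (assumes crictl is on PATH and sudo is available; this is not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names queried in every cycle above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		// --quiet prints only container IDs, one per line.
		out, _ := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+c).Output()
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Empty output: no container exists for this component.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}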
	I1208 01:59:15.220645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.234496 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.234563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.259957 1136586 cri.go:89] found id: ""
	I1208 01:59:15.259981 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.259991 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.259997 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.260059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.285880 1136586 cri.go:89] found id: ""
	I1208 01:59:15.285906 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.285915 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.285921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.285982 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.311506 1136586 cri.go:89] found id: ""
	I1208 01:59:15.311533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.311545 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.311552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.311615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.336490 1136586 cri.go:89] found id: ""
	I1208 01:59:15.336515 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.336524 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.336531 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.336590 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.365039 1136586 cri.go:89] found id: ""
	I1208 01:59:15.365064 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.365073 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.365079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.365143 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.399712 1136586 cri.go:89] found id: ""
	I1208 01:59:15.399740 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.399749 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.399756 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.399821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.427492 1136586 cri.go:89] found id: ""
	I1208 01:59:15.427517 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.427527 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.427533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.427599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.453022 1136586 cri.go:89] found id: ""
	I1208 01:59:15.453050 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.453059 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.453068 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.453081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.468204 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.468283 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.533761 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.525297   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.525841   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.527416   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.528754   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.529318   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.525297   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.525841   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.527416   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.528754   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.529318   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.533785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:15.533801 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:15.558879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.558914 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.593769 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.593794 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.158848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.169444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.169517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.195546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.195572 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.195581 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.195587 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.195649 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.220906 1136586 cri.go:89] found id: ""
	I1208 01:59:18.220928 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.220942 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.220948 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.221008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.248546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.248574 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.248584 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.248590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.248652 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.273450 1136586 cri.go:89] found id: ""
	I1208 01:59:18.273477 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.273486 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.273492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.273558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.298830 1136586 cri.go:89] found id: ""
	I1208 01:59:18.298857 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.298867 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.298874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.298936 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.328161 1136586 cri.go:89] found id: ""
	I1208 01:59:18.328182 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.328191 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.328198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.328258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.369715 1136586 cri.go:89] found id: ""
	I1208 01:59:18.369747 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.369756 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.369763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.369822 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.400838 1136586 cri.go:89] found id: ""
	I1208 01:59:18.400865 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.400874 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.400883 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:18.400913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:18.429677 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.429711 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.462210 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.462239 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.517535 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.517571 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.533236 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.533267 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.604338 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.591883   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.593321   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.594794   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.596094   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.597033   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.591883   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.593321   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.594794   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.596094   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.597033   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
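All of the kubectl failures above share a single root cause: nothing is listening on localhost:8443, so every API discovery request fails with `connect: connection refused`. A minimal probe showing the same condition (the real check happens inside kubectl's discovery client, not in code like this):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver bound to the port, this reproduces the
		// log's error: dial tcp [::1]:8443: connect: connection refused
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8443")
}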
	I1208 01:59:21.106017 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.116977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.117060 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.145425 1136586 cri.go:89] found id: ""
	I1208 01:59:21.145503 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.145526 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.145544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.145633 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.169097 1136586 cri.go:89] found id: ""
	I1208 01:59:21.169125 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.169134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.169140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.169205 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.195045 1136586 cri.go:89] found id: ""
	I1208 01:59:21.195071 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.195081 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.195088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.195153 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.221094 1136586 cri.go:89] found id: ""
	I1208 01:59:21.221128 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.221137 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.221144 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.221213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.247434 1136586 cri.go:89] found id: ""
	I1208 01:59:21.247457 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.247466 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.247472 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.247531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.278610 1136586 cri.go:89] found id: ""
	I1208 01:59:21.278633 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.278642 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.278648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.278712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.304567 1136586 cri.go:89] found id: ""
	I1208 01:59:21.304638 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.304654 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.304662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.304731 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.331211 1136586 cri.go:89] found id: ""
	I1208 01:59:21.331281 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.331304 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.331324 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.331355 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.392474 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.392509 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.413166 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.413192 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.491167 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.478340   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.482949   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.483824   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.485685   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.486126   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.478340   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.482949   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.483824   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.485685   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.486126   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.491190 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:21.491204 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:21.516454 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.516487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:24.050552 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:24.061833 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:24.061907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:24.089336 1136586 cri.go:89] found id: ""
	I1208 01:59:24.089363 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.089372 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:24.089380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:24.089442 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.115231 1136586 cri.go:89] found id: ""
	I1208 01:59:24.115256 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.115265 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.115272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.115347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.141479 1136586 cri.go:89] found id: ""
	I1208 01:59:24.141505 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.141515 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.141522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.141580 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.166759 1136586 cri.go:89] found id: ""
	I1208 01:59:24.166786 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.166795 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.166802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.166862 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.191431 1136586 cri.go:89] found id: ""
	I1208 01:59:24.191453 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.191462 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.191468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.191525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.216578 1136586 cri.go:89] found id: ""
	I1208 01:59:24.216618 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.216628 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.216635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.216708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.242316 1136586 cri.go:89] found id: ""
	I1208 01:59:24.242343 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.242352 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.242358 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.242420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.267328 1136586 cri.go:89] found id: ""
	I1208 01:59:24.267355 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.267365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.267375 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.267386 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.322866 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.322901 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.337393 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.337420 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.422627 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.422649 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:24.422662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:24.447517 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.447551 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
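The cycles repeat at roughly three-second intervals (01:59:06, :09, :12, :15, ...), consistent with a fixed-interval wait loop polling for the apiserver to come up. A sketch of such a loop; the interval and timeout values here are illustrative assumptions, not values read from minikube's source:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls addr until it accepts a TCP connection or the
// timeout elapses. Interval/timeout are assumed, not minikube's.
func waitForAPIServer(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(interval) // retry, as the log cycles do every ~3s
	}
	return fmt.Errorf("apiserver at %s not up after %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}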
	I1208 01:59:26.974915 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.985831 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.985904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:27.015934 1136586 cri.go:89] found id: ""
	I1208 01:59:27.015960 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.015970 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:27.015977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:27.016043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:27.042350 1136586 cri.go:89] found id: ""
	I1208 01:59:27.042376 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.042386 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:27.042400 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:27.042482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:27.068981 1136586 cri.go:89] found id: ""
	I1208 01:59:27.069007 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.069015 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:27.069021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:27.069086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.097058 1136586 cri.go:89] found id: ""
	I1208 01:59:27.097086 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.097095 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.097105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.097168 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.127221 1136586 cri.go:89] found id: ""
	I1208 01:59:27.127245 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.127253 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.127260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.127318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.152834 1136586 cri.go:89] found id: ""
	I1208 01:59:27.152859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.152869 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.152875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.152942 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.185563 1136586 cri.go:89] found id: ""
	I1208 01:59:27.185591 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.185600 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.185606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.185667 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.213022 1136586 cri.go:89] found id: ""
	I1208 01:59:27.213099 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.213125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.213147 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.213183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.272193 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.272229 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.289811 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.289892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.364663 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:27.364695 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:27.364720 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:27.392211 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.392286 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:29.931677 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.942629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.942709 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.971856 1136586 cri.go:89] found id: ""
	I1208 01:59:29.971882 1136586 logs.go:282] 0 containers: []
	W1208 01:59:29.971891 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.971898 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.971958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:30.000222 1136586 cri.go:89] found id: ""
	I1208 01:59:30.000248 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.000258 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:30.000265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:30.000330 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:30.039259 1136586 cri.go:89] found id: ""
	I1208 01:59:30.039285 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.039295 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:30.039301 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:30.039370 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:30.096203 1136586 cri.go:89] found id: ""
	I1208 01:59:30.096247 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.096258 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:30.096265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:30.096348 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.125007 1136586 cri.go:89] found id: ""
	I1208 01:59:30.125034 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.125044 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.125051 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.125138 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.155888 1136586 cri.go:89] found id: ""
	I1208 01:59:30.155914 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.155924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.155931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.155996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.183068 1136586 cri.go:89] found id: ""
	I1208 01:59:30.183104 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.183114 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.183121 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.183186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.211552 1136586 cri.go:89] found id: ""
	I1208 01:59:30.211577 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.211585 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.211601 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:30.211613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:30.238738 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.238789 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.272245 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.272275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.331871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.331909 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:30.349711 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.349742 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.428964 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:32.929192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.940100 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.940183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.963581 1136586 cri.go:89] found id: ""
	I1208 01:59:32.963602 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.963611 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.963617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.963678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.992028 1136586 cri.go:89] found id: ""
	I1208 01:59:32.992054 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.992063 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.992069 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.992130 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:33.023809 1136586 cri.go:89] found id: ""
	I1208 01:59:33.023836 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.023846 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:33.023852 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:33.023919 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:33.048510 1136586 cri.go:89] found id: ""
	I1208 01:59:33.048533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.048541 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:33.048548 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:33.048608 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:33.075068 1136586 cri.go:89] found id: ""
	I1208 01:59:33.075096 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.075106 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:33.075113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:33.075173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.099238 1136586 cri.go:89] found id: ""
	I1208 01:59:33.099264 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.099273 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.099280 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.099345 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.123805 1136586 cri.go:89] found id: ""
	I1208 01:59:33.123831 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.123840 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.123846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.123905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.152142 1136586 cri.go:89] found id: ""
	I1208 01:59:33.152166 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.152175 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.152184 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.152195 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:33.210457 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.210492 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.225387 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.225415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.288797 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.288820 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:33.288834 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:33.314642 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.314675 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:35.847043 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.865523 1136586 out.go:203] 
	W1208 01:59:35.868530 1136586 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 01:59:35.868757 1136586 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 01:59:35.868776 1136586 out.go:285] * Related issues:
	W1208 01:59:35.868792 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 01:59:35.868833 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 01:59:35.873508 1136586 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786139868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786216570Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786327677Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786398012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786492035Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786558530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786618051Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786677259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786750130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786832108Z" level=info msg="Connect containerd service"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787154187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787806804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801520989Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801594475Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801680802Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801735276Z" level=info msg="Start recovering state"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842441332Z" level=info msg="Start event monitor"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842660328Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842725527Z" level=info msg="Start streaming server"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842808506Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842872334Z" level=info msg="runtime interface starting up..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842928778Z" level=info msg="starting plugins..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.843007934Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:53:32 newest-cni-457779 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.845003152Z" level=info msg="containerd successfully booted in 0.084434s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:45.585068   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:45.585630   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:45.587394   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:45.588113   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:45.591844   13757 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:59:45 up  6:42,  0 user,  load average: 1.25, 0.88, 1.24
	Linux newest-cni-457779 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:59:41 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:41 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:41 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:42 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:42 newest-cni-457779 kubelet[13599]: E1208 01:59:42.924476   13599 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:42 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:42 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:43 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 08 01:59:43 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:43 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:43 newest-cni-457779 kubelet[13636]: E1208 01:59:43.655802   13636 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:43 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:43 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:44 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 08 01:59:44 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:44 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:44 newest-cni-457779 kubelet[13657]: E1208 01:59:44.408428   13657 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:44 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:44 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:45 newest-cni-457779 kubelet[13670]: E1208 01:59:45.221906   13670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
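The kubelet entries above show the actual failure behind the crash loop: the v1.35.0-beta.0 kubelet exits during configuration validation because the node is still running cgroup v1. A quick way to confirm which cgroup version is in effect on the host or inside the kic node container (a diagnostic sketch, not part of the test harness; the profile name is taken from the logs above):

	# Diagnostic sketch: "cgroup2fs" means cgroup v2 (unified hierarchy);
	# "tmpfs" means the legacy cgroup v1 hierarchy this kubelet rejects.
	stat -fc %T /sys/fs/cgroup
	# Same check inside the node container, which shares the host cgroup
	# setup ("CgroupnsMode": "host" in the docker inspect output below).
	docker exec newest-cni-457779 stat -fc %T /sys/fs/cgroup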
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (341.52713ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-457779" apiserver is not running, skipping kubectl commands (state="Stopped")
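The harness reads individual status fields through Go templates; the same query can be reproduced by hand when triaging a profile in this state (a sketch reusing the {{.Host}} and {{.APIServer}} fields the harness queries in this test):

	# Triage sketch: print host and apiserver state in one call, using
	# the same Go-template fields the harness queries above and below.
	out/minikube-linux-arm64 status -p newest-cni-457779 --format='{{.Host}} {{.APIServer}}'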
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-457779
helpers_test.go:243: (dbg) docker inspect newest-cni-457779:

-- stdout --
	[
	    {
	        "Id": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	        "Created": "2025-12-08T01:43:39.768991386Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1136714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:53:27.037311302Z",
	            "FinishedAt": "2025-12-08T01:53:25.665351923Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hostname",
	        "HostsPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/hosts",
	        "LogPath": "/var/lib/docker/containers/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515/638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515-json.log",
	        "Name": "/newest-cni-457779",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-457779:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-457779",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "638bfd2d42fac2f2e751d2c47909505bd711bda7f8ab05b84ddc15506eda5515",
	                "LowerDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4d98e305b3c25643b4acd14abffe6e15d4e80406dee9c99623ab9dbdcb50a9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-457779",
	                "Source": "/var/lib/docker/volumes/newest-cni-457779/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-457779",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-457779",
	                "name.minikube.sigs.k8s.io": "newest-cni-457779",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a1a947731c9f343bfc621f32c5e5e6b87b4d6596e40159c82f35b05d4b004c86",
	            "SandboxKey": "/var/run/docker/netns/a1a947731c9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33874"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33875"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-457779": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d0:aa:7b:8e:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e759035a3431798f7b6fae1fcd872afa7240c356fb1da4c53589714768a6edc3",
	                    "EndpointID": "88ca36c415275c64fba1e1779bb8c75173dfd0b7a6e82aa393b48ff675c0db50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-457779",
	                        "638bfd2d42fa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
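The inspect output shows every node port published on a loopback-only ephemeral port (8443/tcp lands on 127.0.0.1:33876 here). A single mapping can be extracted with the same Go-template shape the harness uses later in this log for 22/tcp (a sketch; the port and profile name come from the output above):

	# Sketch: extract the host port Docker mapped to the apiserver's
	# 8443/tcp endpoint for this profile.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-457779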
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (358.8249ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-457779 logs -n 25: (1.620003699s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p embed-certs-719683                                                                                                                                                                                                                                      │ embed-certs-719683           │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ delete  │ -p disable-driver-mounts-879407                                                                                                                                                                                                                            │ disable-driver-mounts-879407 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:41 UTC │ 08 Dec 25 01:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ stop    │ -p default-k8s-diff-port-843696 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:42 UTC │
	│ start   │ -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:42 UTC │ 08 Dec 25 01:43 UTC │
	│ image   │ default-k8s-diff-port-843696 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ pause   │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ unpause │ -p default-k8s-diff-port-843696 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ delete  │ -p default-k8s-diff-port-843696                                                                                                                                                                                                                            │ default-k8s-diff-port-843696 │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │ 08 Dec 25 01:43 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-536520 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:45 UTC │                     │
	│ stop    │ -p no-preload-536520 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ addons  │ enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │ 08 Dec 25 01:47 UTC │
	│ start   │ -p no-preload-536520 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-536520            │ jenkins │ v1.37.0 │ 08 Dec 25 01:47 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-457779 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:51 UTC │                     │
	│ stop    │ -p newest-cni-457779 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-457779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │ 08 Dec 25 01:53 UTC │
	│ start   │ -p newest-cni-457779 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:53 UTC │                     │
	│ image   │ newest-cni-457779 image list --format=json                                                                                                                                                                                                                 │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	│ pause   │ -p newest-cni-457779 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	│ unpause │ -p newest-cni-457779 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-457779            │ jenkins │ v1.37.0 │ 08 Dec 25 01:59 UTC │ 08 Dec 25 01:59 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 01:53:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 01:53:26.756000 1136586 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:53:26.756538 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756548 1136586 out.go:374] Setting ErrFile to fd 2...
	I1208 01:53:26.756553 1136586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:53:26.756842 1136586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:53:26.757268 1136586 out.go:368] Setting JSON to false
	I1208 01:53:26.758219 1136586 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":23760,"bootTime":1765135047,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:53:26.758285 1136586 start.go:143] virtualization:  
	I1208 01:53:26.761027 1136586 out.go:179] * [newest-cni-457779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:53:26.763300 1136586 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:53:26.763385 1136586 notify.go:221] Checking for updates...
	I1208 01:53:26.769236 1136586 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:53:26.772301 1136586 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:26.775351 1136586 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:53:26.778370 1136586 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:53:26.781331 1136586 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:53:26.784939 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:26.785587 1136586 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:53:26.821497 1136586 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:53:26.821612 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.884858 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.874574541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.884969 1136586 docker.go:319] overlay module found
	I1208 01:53:26.888166 1136586 out.go:179] * Using the docker driver based on existing profile
	I1208 01:53:26.891132 1136586 start.go:309] selected driver: docker
	I1208 01:53:26.891162 1136586 start.go:927] validating driver "docker" against &{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.891271 1136586 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:53:26.892009 1136586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:53:26.946578 1136586 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:53:26.937487208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:53:26.946934 1136586 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1208 01:53:26.946970 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:26.947032 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:26.947088 1136586 start.go:353] cluster config:
	{Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:26.951997 1136586 out.go:179] * Starting "newest-cni-457779" primary control-plane node in "newest-cni-457779" cluster
	I1208 01:53:26.954840 1136586 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 01:53:26.957745 1136586 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 01:53:26.960653 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:26.960709 1136586 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I1208 01:53:26.960722 1136586 cache.go:65] Caching tarball of preloaded images
	I1208 01:53:26.960734 1136586 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 01:53:26.960819 1136586 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 01:53:26.960831 1136586 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
	I1208 01:53:26.961033 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:26.980599 1136586 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 01:53:26.980630 1136586 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 01:53:26.980646 1136586 cache.go:243] Successfully downloaded all kic artifacts
	I1208 01:53:26.980676 1136586 start.go:360] acquireMachinesLock for newest-cni-457779: {Name:mk3564dfd287c1162906838682a59fd937727bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 01:53:26.980741 1136586 start.go:364] duration metric: took 41.994µs to acquireMachinesLock for "newest-cni-457779"
	I1208 01:53:26.980766 1136586 start.go:96] Skipping create...Using existing machine configuration
	I1208 01:53:26.980775 1136586 fix.go:54] fixHost starting: 
	I1208 01:53:26.981064 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:26.998167 1136586 fix.go:112] recreateIfNeeded on newest-cni-457779: state=Stopped err=<nil>
	W1208 01:53:26.998205 1136586 fix.go:138] unexpected machine state, will restart: <nil>
	W1208 01:53:25.593347 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:27.593483 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	W1208 01:53:30.093460 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:27.003360 1136586 out.go:252] * Restarting existing docker container for "newest-cni-457779" ...
	I1208 01:53:27.003497 1136586 cli_runner.go:164] Run: docker start newest-cni-457779
	I1208 01:53:27.261076 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:27.282732 1136586 kic.go:430] container "newest-cni-457779" state is running.
	I1208 01:53:27.283122 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:27.311045 1136586 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/config.json ...
	I1208 01:53:27.311287 1136586 machine.go:94] provisionDockerMachine start ...
	I1208 01:53:27.311346 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
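
The Go template in the inspect call above is how the run discovers which host port Docker mapped to the container's sshd (33873 here). A minimal stand-alone sketch of the same lookup, with the template copied from the log line and the outer quoting dropped:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is bound to the container's 22/tcp,
// using the same template as the log line above.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("newest-cni-457779")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // 33873 in the run above
}
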
	I1208 01:53:27.335078 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:27.335680 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:27.335692 1136586 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 01:53:27.336739 1136586 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 01:53:30.502303 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
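
The `Error dialing TCP: ssh: handshake failed: EOF` on the first attempt just means sshd inside the freshly started container was not accepting connections yet; the next dial succeeds and runs `hostname`. A sketch of the same session using golang.org/x/crypto/ssh (not libmachine's internals; key path, user, and port are the ones logged above):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test rig only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33873", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // newest-cni-457779
}
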
	
	I1208 01:53:30.502328 1136586 ubuntu.go:182] provisioning hostname "newest-cni-457779"
	I1208 01:53:30.502403 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.520473 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.520821 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.520832 1136586 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-457779 && echo "newest-cni-457779" | sudo tee /etc/hostname
	I1208 01:53:30.680340 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-457779
	
	I1208 01:53:30.680522 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:30.698887 1136586 main.go:143] libmachine: Using SSH client type: native
	I1208 01:53:30.699207 1136586 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33873 <nil> <nil>}
	I1208 01:53:30.699230 1136586 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-457779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-457779/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-457779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 01:53:30.850881 1136586 main.go:143] libmachine: SSH cmd err, output: <nil>: 
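
The shell snippet above pins the hostname to 127.0.1.1 idempotently. A simplified native equivalent (a hypothetical helper, not minikube's code; it omits the shell's initial "hostname already present anywhere in the file" check):

package main

import (
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites an existing 127.0.1.1 line to the new hostname,
// or appends one if no such line exists.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	re := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if re.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-457779"); err != nil {
		panic(err)
	}
}
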
	I1208 01:53:30.850907 1136586 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 01:53:30.850931 1136586 ubuntu.go:190] setting up certificates
	I1208 01:53:30.850939 1136586 provision.go:84] configureAuth start
	I1208 01:53:30.851000 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:30.868852 1136586 provision.go:143] copyHostCerts
	I1208 01:53:30.868925 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 01:53:30.868935 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 01:53:30.869018 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 01:53:30.869113 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 01:53:30.869119 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 01:53:30.869143 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 01:53:30.869192 1136586 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 01:53:30.869197 1136586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 01:53:30.869218 1136586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 01:53:30.869262 1136586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.newest-cni-457779 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-457779]
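
The server cert above is issued by the profile's CA with the listed SANs. A sketch of that issuance in crypto/x509, assuming an RSA, PKCS#1-encoded CA key (file names shortened from the logged paths; minikube's real helper also writes the key and uses its own serial scheme):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func pemBytes(path string) []byte {
	block, _ := pem.Decode(must(os.ReadFile(path)))
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert := must(x509.ParseCertificate(pemBytes("ca.pem")))
	caKey := must(x509.ParsePKCS1PrivateKey(pemBytes("ca-key.pem")))
	key := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-457779"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-457779"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
	must(0, os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
}
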
	I1208 01:53:31.146721 1136586 provision.go:177] copyRemoteCerts
	I1208 01:53:31.146819 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 01:53:31.146887 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.165202 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.270344 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 01:53:31.288520 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1208 01:53:31.307009 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1208 01:53:31.325139 1136586 provision.go:87] duration metric: took 474.176778ms to configureAuth
	I1208 01:53:31.325166 1136586 ubuntu.go:206] setting minikube options for container-runtime
	I1208 01:53:31.325413 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:31.325428 1136586 machine.go:97] duration metric: took 4.014132188s to provisionDockerMachine
	I1208 01:53:31.325438 1136586 start.go:293] postStartSetup for "newest-cni-457779" (driver="docker")
	I1208 01:53:31.325453 1136586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 01:53:31.325527 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 01:53:31.325572 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.342958 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.450484 1136586 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 01:53:31.453930 1136586 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 01:53:31.453961 1136586 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 01:53:31.453978 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 01:53:31.454035 1136586 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 01:53:31.454126 1136586 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 01:53:31.454236 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 01:53:31.461814 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:31.480492 1136586 start.go:296] duration metric: took 155.029827ms for postStartSetup
	I1208 01:53:31.480576 1136586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:53:31.480620 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.498567 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.608416 1136586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 01:53:31.613302 1136586 fix.go:56] duration metric: took 4.632518901s for fixHost
	I1208 01:53:31.613327 1136586 start.go:83] releasing machines lock for "newest-cni-457779", held for 4.632572375s
	I1208 01:53:31.613414 1136586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-457779
	I1208 01:53:31.630699 1136586 ssh_runner.go:195] Run: cat /version.json
	I1208 01:53:31.630750 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.630785 1136586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 01:53:31.630847 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:31.650759 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.653824 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:31.754273 1136586 ssh_runner.go:195] Run: systemctl --version
	I1208 01:53:31.849639 1136586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 01:53:31.855754 1136586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 01:53:31.855850 1136586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 01:53:31.866557 1136586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1208 01:53:31.866588 1136586 start.go:496] detecting cgroup driver to use...
	I1208 01:53:31.866621 1136586 detect.go:187] detected "cgroupfs" cgroup driver on host os
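
One plausible detection heuristic for the "cgroupfs" result above (not necessarily what minikube's detect.go does): call the driver "systemd" only when the unified cgroup v2 tree is mounted and PID 1 is systemd, and fall back to "cgroupfs" otherwise, as this host reported.

package main

import (
	"fmt"
	"os"
	"strings"
)

func detectCgroupDriver() string {
	// cgroup.controllers exists only on the unified (v2) hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		if comm, err := os.ReadFile("/proc/1/comm"); err == nil &&
			strings.TrimSpace(string(comm)) == "systemd" {
			return "systemd"
		}
	}
	return "cgroupfs"
}

func main() { fmt.Println(detectCgroupDriver()) }
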
	I1208 01:53:31.866707 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 01:53:31.887994 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 01:53:31.906727 1136586 docker.go:218] disabling cri-docker service (if available) ...
	I1208 01:53:31.906830 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 01:53:31.922954 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 01:53:31.936664 1136586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 01:53:32.054316 1136586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 01:53:32.173483 1136586 docker.go:234] disabling docker service ...
	I1208 01:53:32.173578 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 01:53:32.189444 1136586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 01:53:32.206742 1136586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 01:53:32.325262 1136586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 01:53:32.443602 1136586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 01:53:32.456770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 01:53:32.473213 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 01:53:32.483724 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 01:53:32.493138 1136586 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 01:53:32.493251 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 01:53:32.502652 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.512217 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 01:53:32.521333 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 01:53:32.530989 1136586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 01:53:32.539889 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 01:53:32.549127 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 01:53:32.558425 1136586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
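
The sed pipeline above normalizes /etc/containerd/config.toml in place: sandbox image, cgroup driver, runc v2 shim, CNI conf dir, unprivileged ports. A native sketch of just the SystemdCgroup rewrite (the step that enforces the "cgroupfs" driver chosen earlier):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
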
	I1208 01:53:32.567684 1136586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 01:53:32.575542 1136586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 01:53:32.583139 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:32.723777 1136586 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 01:53:32.846014 1136586 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 01:53:32.846088 1136586 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 01:53:32.849865 1136586 start.go:564] Will wait 60s for crictl version
	I1208 01:53:32.849924 1136586 ssh_runner.go:195] Run: which crictl
	I1208 01:53:32.853562 1136586 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 01:53:32.880330 1136586 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 01:53:32.880452 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.901579 1136586 ssh_runner.go:195] Run: containerd --version
	I1208 01:53:32.928462 1136586 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
	I1208 01:53:32.931363 1136586 cli_runner.go:164] Run: docker network inspect newest-cni-457779 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 01:53:32.945897 1136586 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 01:53:32.950021 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:32.963090 1136586 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1208 01:53:32.593363 1128548 node_ready.go:55] error getting node "no-preload-536520" condition "Ready" status (will retry): Get "https://192.168.85.2:8443/api/v1/nodes/no-preload-536520": dial tcp 192.168.85.2:8443: connect: connection refused
	I1208 01:53:33.093099 1128548 node_ready.go:38] duration metric: took 6m0.00024354s for node "no-preload-536520" to be "Ready" ...
	I1208 01:53:33.096356 1128548 out.go:203] 
	W1208 01:53:33.099424 1128548 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1208 01:53:33.099449 1128548 out.go:285] * 
	W1208 01:53:33.101601 1128548 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1208 01:53:33.103637 1128548 out.go:203] 
	I1208 01:53:32.966006 1136586 kubeadm.go:884] updating cluster {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 01:53:32.966181 1136586 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1208 01:53:32.966277 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.001671 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.001709 1136586 containerd.go:534] Images already preloaded, skipping extraction
	I1208 01:53:33.001783 1136586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 01:53:33.037763 1136586 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 01:53:33.037789 1136586 cache_images.go:86] Images are preloaded, skipping loading
	I1208 01:53:33.037796 1136586 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1208 01:53:33.037895 1136586 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-457779 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
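
The kubelet drop-in above is rendered from the node settings. A sketch of that rendering with text/template (the unit text is illustrative and abbreviated, not minikube's exact asset):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime": "containerd",
		"Version": "v1.35.0-beta.0",
		"Node":    "newest-cni-457779",
		"IP":      "192.168.76.2",
	})
	if err != nil {
		panic(err)
	}
}
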
	I1208 01:53:33.037971 1136586 ssh_runner.go:195] Run: sudo crictl info
	I1208 01:53:33.063762 1136586 cni.go:84] Creating CNI manager for ""
	I1208 01:53:33.063790 1136586 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 01:53:33.063814 1136586 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1208 01:53:33.063838 1136586 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-457779 NodeName:newest-cni-457779 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 01:53:33.063976 1136586 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-457779"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
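
A cheap sanity check for a multi-document config like the one above is to decode each YAML document and print its kind/apiVersion before shipping the file to /var/tmp/minikube/kubeadm.yaml.new. A sketch, assuming gopkg.in/yaml.v3 and the config saved locally as kubeadm.yaml:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
	// Expected here: InitConfiguration, ClusterConfiguration,
	// KubeletConfiguration, KubeProxyConfiguration.
}
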
	
	I1208 01:53:33.064046 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1208 01:53:33.072124 1136586 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 01:53:33.072199 1136586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 01:53:33.079978 1136586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1208 01:53:33.094440 1136586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1208 01:53:33.114285 1136586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1208 01:53:33.148370 1136586 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 01:53:33.154333 1136586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 01:53:33.175383 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:33.368419 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:33.425889 1136586 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779 for IP: 192.168.76.2
	I1208 01:53:33.425915 1136586 certs.go:195] generating shared ca certs ...
	I1208 01:53:33.425933 1136586 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:33.426101 1136586 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 01:53:33.426153 1136586 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 01:53:33.426161 1136586 certs.go:257] generating profile certs ...
	I1208 01:53:33.426267 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/client.key
	I1208 01:53:33.426332 1136586 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key.c0ab0399
	I1208 01:53:33.426377 1136586 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key
	I1208 01:53:33.426524 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 01:53:33.426568 1136586 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 01:53:33.426582 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 01:53:33.426612 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 01:53:33.426642 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 01:53:33.426669 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 01:53:33.426734 1136586 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 01:53:33.427335 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 01:53:33.467362 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 01:53:33.494653 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 01:53:33.520274 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 01:53:33.539143 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1208 01:53:33.558359 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 01:53:33.583585 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 01:53:33.606437 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/newest-cni-457779/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 01:53:33.629051 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 01:53:33.649569 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 01:53:33.670329 1136586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 01:53:33.709388 1136586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 01:53:33.723127 1136586 ssh_runner.go:195] Run: openssl version
	I1208 01:53:33.729848 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.737400 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 01:53:33.744968 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749630 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.749695 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 01:53:33.792574 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 01:53:33.800140 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.812741 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 01:53:33.821534 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825755 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.825831 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 01:53:33.873472 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 01:53:33.882187 1136586 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.890767 1136586 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 01:53:33.901446 1136586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907874 1136586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.907943 1136586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 01:53:33.952061 1136586 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
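
The hash-and-symlink sequences above install each CA the way OpenSSL expects: `openssl x509 -hash` prints the subject hash, and the cert is linked as /etc/ssl/certs/<hash>.0 (which is why minikubeCA.pem is checked as b5213941.0). A sketch of one such installation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // force, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
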
	I1208 01:53:33.960568 1136586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 01:53:33.965214 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 01:53:34.008563 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 01:53:34.055484 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 01:53:34.112335 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 01:53:34.165388 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 01:53:34.216189 1136586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
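
Each `-checkend 86400` call above asks whether the cert expires within the next 24 hours. The Go equivalent, for any of the paths checked:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
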
	I1208 01:53:34.263034 1136586 kubeadm.go:401] StartCluster: {Name:newest-cni-457779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-457779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 01:53:34.263135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 01:53:34.263235 1136586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 01:53:34.294120 1136586 cri.go:89] found id: ""
	I1208 01:53:34.294243 1136586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 01:53:34.304846 1136586 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 01:53:34.304879 1136586 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 01:53:34.304960 1136586 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 01:53:34.316473 1136586 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 01:53:34.317189 1136586 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-457779" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.317527 1136586 kubeconfig.go:62] /home/jenkins/minikube-integration/22054-843440/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-457779" cluster setting kubeconfig missing "newest-cni-457779" context setting]
	I1208 01:53:34.318043 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.319993 1136586 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 01:53:34.332564 1136586 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1208 01:53:34.332599 1136586 kubeadm.go:602] duration metric: took 27.712722ms to restartPrimaryControlPlane
	I1208 01:53:34.332638 1136586 kubeadm.go:403] duration metric: took 69.60712ms to StartCluster
	I1208 01:53:34.332662 1136586 settings.go:142] acquiring lock: {Name:mk7f1e6ee0926e03bdeb293ba0f47cde549dc315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.332751 1136586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:53:34.333761 1136586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/kubeconfig: {Name:mk0e56756fec9ab88ccb2f535d19d66469047d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 01:53:34.334050 1136586 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 01:53:34.334509 1136586 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 01:53:34.334590 1136586 config.go:182] Loaded profile config "newest-cni-457779": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:53:34.334604 1136586 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-457779"
	I1208 01:53:34.334619 1136586 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-457779"
	I1208 01:53:34.334646 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.334654 1136586 addons.go:70] Setting dashboard=true in profile "newest-cni-457779"
	I1208 01:53:34.334664 1136586 addons.go:239] Setting addon dashboard=true in "newest-cni-457779"
	W1208 01:53:34.334680 1136586 addons.go:248] addon dashboard should already be in state true
	I1208 01:53:34.334701 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.335128 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.335222 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.338384 1136586 out.go:179] * Verifying Kubernetes components...
	I1208 01:53:34.338808 1136586 addons.go:70] Setting default-storageclass=true in profile "newest-cni-457779"
	I1208 01:53:34.338830 1136586 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-457779"
	I1208 01:53:34.339192 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.342236 1136586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 01:53:34.384696 1136586 addons.go:239] Setting addon default-storageclass=true in "newest-cni-457779"
	I1208 01:53:34.384738 1136586 host.go:66] Checking if "newest-cni-457779" exists ...
	I1208 01:53:34.385173 1136586 cli_runner.go:164] Run: docker container inspect newest-cni-457779 --format={{.State.Status}}
	I1208 01:53:34.395531 1136586 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1208 01:53:34.398489 1136586 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1208 01:53:34.401766 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1208 01:53:34.401802 1136586 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1208 01:53:34.401870 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.413624 1136586 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 01:53:34.416611 1136586 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.416635 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 01:53:34.416703 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.446412 1136586 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.446432 1136586 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 01:53:34.446519 1136586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-457779
	I1208 01:53:34.468661 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.486870 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.495400 1136586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33873 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/newest-cni-457779/id_rsa Username:docker}
	I1208 01:53:34.648143 1136586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 01:53:34.791310 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1208 01:53:34.791383 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1208 01:53:34.801259 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:34.809204 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:34.852787 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1208 01:53:34.852815 1136586 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1208 01:53:34.976510 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1208 01:53:34.976546 1136586 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1208 01:53:35.059518 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1208 01:53:35.059546 1136586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1208 01:53:35.081694 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1208 01:53:35.081725 1136586 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1208 01:53:35.097221 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1208 01:53:35.097249 1136586 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1208 01:53:35.113396 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1208 01:53:35.113423 1136586 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1208 01:53:35.128309 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1208 01:53:35.128332 1136586 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1208 01:53:35.144063 1136586 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.144088 1136586 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1208 01:53:35.163973 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:35.343568 1136586 api_server.go:52] waiting for apiserver process to appear ...
	I1208 01:53:35.343639 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:35.343728 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343749 1136586 retry.go:31] will retry after 313.237886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.343796 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343802 1136586 retry.go:31] will retry after 267.065812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
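
The `retry.go:31] will retry after ...` lines above come from a helper of roughly this shape: re-run the apply with a short randomized delay until the apiserver comes back or attempts are exhausted. A simplified sketch (minikube's actual backoff policy differs):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Randomize the delay so parallel retries don't stampede.
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	// Stand-in for `kubectl apply -f ...` while the apiserver is still down.
	err := retry(5, 200*time.Millisecond, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
	fmt.Println("gave up:", err)
}
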
	W1208 01:53:35.343986 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.343999 1136586 retry.go:31] will retry after 357.870271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.611924 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:35.657423 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:35.685479 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.685507 1136586 retry.go:31] will retry after 235.819569ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.702853 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:35.745089 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.745200 1136586 retry.go:31] will retry after 496.615001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:35.783116 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.783150 1136586 retry.go:31] will retry after 415.603405ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.844207 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
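The interleaved sudo pgrep -xnf kube-apiserver.*minikube.* runs are minikube polling for an apiserver process while the applies fail; note also that from 01:53:35.61 onward the applies are re-issued with --force. A rough Go sketch of that process probe follows; the function name is hypothetical and sudo is dropped for brevity.

    // Rough sketch of the process probe behind the repeated
    // "sudo pgrep -xnf kube-apiserver.*minikube.*" log lines.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverRunning reports whether any process's full command line matches
    // the pattern; pgrep exits 0 on a match and 1 when nothing matches, so a
    // nil error from Run means a match was found.
    func apiserverRunning() bool {
        return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        if apiserverRunning() {
            fmt.Println("kube-apiserver process found")
        } else {
            fmt.Println("no kube-apiserver; the addon applies above will keep failing")
        }
    }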
	I1208 01:53:35.922577 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:35.992239 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:35.992284 1136586 retry.go:31] will retry after 419.233092ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.199657 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.242360 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.275822 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.275881 1136586 retry.go:31] will retry after 506.304834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:53:36.313961 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.313996 1136586 retry.go:31] will retry after 341.203132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.344211 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:36.412076 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:36.475666 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.475724 1136586 retry.go:31] will retry after 757.567155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.656038 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:36.717469 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.717504 1136586 retry.go:31] will retry after 858.45693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.782939 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:36.844509 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:36.857199 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:36.857314 1136586 retry.go:31] will retry after 1.254351113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.233554 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:37.293681 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.293719 1136586 retry.go:31] will retry after 1.120312347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.343808 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:37.576883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:37.657137 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.657170 1136586 retry.go:31] will retry after 1.273828893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:37.844396 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.111904 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:38.175735 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.175771 1136586 retry.go:31] will retry after 1.371961744s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.344170 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.414206 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:38.473557 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.473592 1136586 retry.go:31] will retry after 1.305474532s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.843968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:38.931790 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:38.991073 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:38.991107 1136586 retry.go:31] will retry after 2.323329318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
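By this point the delays logged by retry.go have grown from a few hundred milliseconds (267ms, 357ms, ...) to over two seconds, the usual shape of a jittered backoff. The sketch below shows a retry-with-backoff loop of that shape in Go; the names and constants are illustrative, not minikube's actual retry API.

    // Minimal sketch of a jittered retry-with-backoff loop like the one these
    // retry.go lines log. Illustrative names and constants only.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        delay := 250 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
                return nil
            }
            // Randomize the wait a little and roughly double it each round,
            // matching the ~267ms -> ~2.3s progression in the log above.
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }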
	I1208 01:53:39.344538 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:39.548354 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:39.614499 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.614532 1136586 retry.go:31] will retry after 2.345376349s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:39.779883 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:39.839516 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:39.839550 1136586 retry.go:31] will retry after 1.632764803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:39.843744 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.343857 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:40.844131 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
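Note: the pgrep lines interleaved through this stretch are minikube polling, roughly twice a second, for a running kube-apiserver process while the addon applies keep failing. A self-contained sketch of that loop (the helper name and timeout are assumptions, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess mirrors the ~500ms
    // "sudo pgrep -xnf kube-apiserver.*minikube.*" poll in the log;
    // pgrep exits 0 only when a matching process exists.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(30 * time.Second))
    }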
	I1208 01:53:41.314885 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:41.344468 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:41.399054 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.399086 1136586 retry.go:31] will retry after 1.628703977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:41.473438 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:41.539567 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:41.539608 1136586 retry.go:31] will retry after 4.6526683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:41.844314 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:41.960631 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:42.037435 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:42.037475 1136586 retry.go:31] will retry after 2.24839836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:42.343723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:42.843913 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.028344 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:43.092228 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:43.092267 1136586 retry.go:31] will retry after 6.138872071s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:43.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:43.843812 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:44.286696 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:53:44.343910 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:44.363154 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:44.363184 1136586 retry.go:31] will retry after 4.885412288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:44.843802 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.344023 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:45.844504 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.193193 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:53:46.256318 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:46.256352 1136586 retry.go:31] will retry after 6.576205276s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:46.344576 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:46.844679 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.343751 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:47.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.344358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:48.843925 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.231766 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 01:53:49.249321 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:49.295577 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.295606 1136586 retry.go:31] will retry after 5.897796539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1208 01:53:49.321879 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:49.321913 1136586 retry.go:31] will retry after 5.135606393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:49.343793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:49.843777 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:50.844708 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.344109 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:51.844601 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.344090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:52.833191 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:53:52.843854 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:53:52.942603 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:52.942641 1136586 retry.go:31] will retry after 10.350172314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1208 01:53:53.344347 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:53.843800 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.343948 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:54.457681 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:53:54.519827 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:54.519864 1136586 retry.go:31] will retry after 12.267694675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1208 01:53:54.844117 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.193625 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:53:55.256579 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:53:55.256612 1136586 retry.go:31] will retry after 11.163170119s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:53:55.343847 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:55.843783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.343814 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:56.844654 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.344616 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:57.844487 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.343880 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:58.843787 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.343848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:53:59.843826 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.343799 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:00.844518 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.343861 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:01.844575 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.343756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:02.844391 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:03.293666 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1208 01:54:03.344443 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:03.397612 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:03.397650 1136586 retry.go:31] will retry after 19.276295687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
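Note: by this point the backoff pattern is visible: the storageclass.yaml retries have grown from 1.6s through 4.7s, 6.6s, and 10.4s to 19.3s, i.e., a roughly doubling delay with jitter rather than a fixed interval. A minimal sketch of that pattern (assumed shape, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry is a hypothetical stand-in for the addons.go/retry.go
    // pair above: re-run `kubectl apply --force`, waiting roughly twice as
    // long (plus jitter) after each failure.
    func applyWithRetry(manifest string, attempts int) error {
        wait := 2 * time.Second
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            fmt.Printf("apply failed, will retry after %v: %v\n%s", wait, err, out)
            time.Sleep(wait)
            wait = wait*2 + time.Duration(rand.Int63n(int64(wait))) // grow the wait with jitter
        }
        return fmt.Errorf("gave up after %d attempts", attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }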
	I1208 01:54:03.844417 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.343968 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:04.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.344710 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:05.843828 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.344305 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:06.420172 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:06.484485 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.484519 1136586 retry.go:31] will retry after 9.376809348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1208 01:54:06.788188 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1208 01:54:06.843694 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1208 01:54:06.852042 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:06.852079 1136586 retry.go:31] will retry after 14.243902866s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:07.344022 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:07.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.344592 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:08.844723 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.344453 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:09.843950 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.344400 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:10.844496 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.343717 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:11.844737 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.344750 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:12.843793 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.343904 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:13.843827 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.343908 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:14.844260 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.344591 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.843791 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:15.862033 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:15.923558 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:15.923598 1136586 retry.go:31] will retry after 11.623443237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:16.344246 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:16.844386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.344635 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:17.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.344732 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:18.843932 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.344121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:19.844530 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.344183 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:20.844204 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.097241 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:21.169765 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.169803 1136586 retry.go:31] will retry after 14.268049825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:21.343856 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:21.844672 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.344587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:22.674615 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:22.733064 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.733093 1136586 retry.go:31] will retry after 25.324201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:22.844513 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.344392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:23.844423 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.343928 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:24.844484 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.344404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:25.844721 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.344197 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:26.844678 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.343798 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:27.547765 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:27.612562 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.612601 1136586 retry.go:31] will retry after 28.822296594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:27.843863 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.344385 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:28.843784 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.344796 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:29.843768 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.344407 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:30.844544 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.343765 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:31.844221 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.343845 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:32.844333 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.344526 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:33.844321 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:34.344033 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:34.344149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:34.370172 1136586 cri.go:89] found id: ""
	I1208 01:54:34.370196 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.370205 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:34.370211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:34.370269 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:34.395619 1136586 cri.go:89] found id: ""
	I1208 01:54:34.395642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.395650 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:34.395656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:34.395720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:34.422963 1136586 cri.go:89] found id: ""
	I1208 01:54:34.422993 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.423003 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:34.423009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:34.423074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:34.451846 1136586 cri.go:89] found id: ""
	I1208 01:54:34.451871 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.451879 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:34.451886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:34.451951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:34.480597 1136586 cri.go:89] found id: ""
	I1208 01:54:34.480622 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.480631 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:34.480638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:34.480728 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:34.505381 1136586 cri.go:89] found id: ""
	I1208 01:54:34.505412 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.505421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:34.505427 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:34.505486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:34.531276 1136586 cri.go:89] found id: ""
	I1208 01:54:34.531304 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.531313 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:34.531320 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:34.531384 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:34.556518 1136586 cri.go:89] found id: ""
	I1208 01:54:34.556542 1136586 logs.go:282] 0 containers: []
	W1208 01:54:34.556550 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:34.556566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:34.556578 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:34.613370 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:34.613408 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:34.628308 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:34.628338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:34.694181 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:34.685922    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.686576    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688249    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.688761    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:34.690285    1842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:34.694202 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:34.694216 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:34.720374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:34.720425 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:35.438126 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:54:35.498508 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:35.498543 1136586 retry.go:31] will retry after 43.888808015s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:37.252653 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:37.264309 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:37.264385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:37.296827 1136586 cri.go:89] found id: ""
	I1208 01:54:37.296856 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.296865 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:37.296872 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:37.296938 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:37.322795 1136586 cri.go:89] found id: ""
	I1208 01:54:37.322818 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.322826 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:37.322832 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:37.322890 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:37.347015 1136586 cri.go:89] found id: ""
	I1208 01:54:37.347039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.347048 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:37.347054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:37.347112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:37.376654 1136586 cri.go:89] found id: ""
	I1208 01:54:37.376685 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.376694 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:37.376702 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:37.376768 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:37.402392 1136586 cri.go:89] found id: ""
	I1208 01:54:37.402419 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.402428 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:37.402434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:37.402531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:37.427265 1136586 cri.go:89] found id: ""
	I1208 01:54:37.427292 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.427302 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:37.427308 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:37.427375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:37.452009 1136586 cri.go:89] found id: ""
	I1208 01:54:37.452036 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.452046 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:37.452052 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:37.452113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:37.478250 1136586 cri.go:89] found id: ""
	I1208 01:54:37.478274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:37.478282 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:37.478292 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:37.478303 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:37.492990 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:37.493059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:37.560010 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:37.551514    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.552088    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.553825    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.554515    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:37.556053    1958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:37.560033 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:37.560046 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:37.586791 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:37.586827 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:37.617527 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:37.617603 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.174865 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:40.187458 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:40.187538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:40.216164 1136586 cri.go:89] found id: ""
	I1208 01:54:40.216195 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.216204 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:40.216211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:40.216280 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:40.243524 1136586 cri.go:89] found id: ""
	I1208 01:54:40.243552 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.243561 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:40.243567 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:40.243632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:40.273554 1136586 cri.go:89] found id: ""
	I1208 01:54:40.273582 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.273592 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:40.273598 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:40.273660 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:40.301228 1136586 cri.go:89] found id: ""
	I1208 01:54:40.301249 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.301257 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:40.301263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:40.301321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:40.330159 1136586 cri.go:89] found id: ""
	I1208 01:54:40.330179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.330187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:40.330193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:40.330252 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:40.355514 1136586 cri.go:89] found id: ""
	I1208 01:54:40.355583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.355604 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:40.355611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:40.355685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:40.381442 1136586 cri.go:89] found id: ""
	I1208 01:54:40.381468 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.381477 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:40.381483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:40.381539 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:40.406014 1136586 cri.go:89] found id: ""
	I1208 01:54:40.406039 1136586 logs.go:282] 0 containers: []
	W1208 01:54:40.406048 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:40.406057 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:40.406069 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:40.465966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:40.458498    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.458883    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460242    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.460569    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:40.462027    2068 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:40.465986 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:40.466000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:40.490766 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:40.490799 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:40.518111 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:40.518140 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:40.573667 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:40.573702 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.088883 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:43.112185 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:43.112253 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:43.175929 1136586 cri.go:89] found id: ""
	I1208 01:54:43.175952 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.175960 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:43.175966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:43.176037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:43.208920 1136586 cri.go:89] found id: ""
	I1208 01:54:43.208946 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.208955 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:43.208961 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:43.209024 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:43.235210 1136586 cri.go:89] found id: ""
	I1208 01:54:43.235235 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.235245 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:43.235252 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:43.235319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:43.263618 1136586 cri.go:89] found id: ""
	I1208 01:54:43.263642 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.263658 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:43.263666 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:43.263727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:43.290748 1136586 cri.go:89] found id: ""
	I1208 01:54:43.290783 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.290792 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:43.290798 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:43.290857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:43.314874 1136586 cri.go:89] found id: ""
	I1208 01:54:43.314898 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.314906 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:43.314913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:43.314975 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:43.339655 1136586 cri.go:89] found id: ""
	I1208 01:54:43.339680 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.339707 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:43.339713 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:43.339777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:43.364203 1136586 cri.go:89] found id: ""
	I1208 01:54:43.364230 1136586 logs.go:282] 0 containers: []
	W1208 01:54:43.364240 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:43.364250 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:43.364261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:43.390041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:43.390079 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:43.420626 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:43.420661 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:43.475834 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:43.475876 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:43.491658 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:43.491696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:43.559609 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:43.550387    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.551253    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.552993    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.553652    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:43.555343    2197 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
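Every control-plane query in this window fails the same way: nothing answers on localhost:8443, so the crictl lookups find no containers and kubectl describe nodes is refused. One way to probe the same symptom by hand, assuming shell access to the node (for example via minikube ssh; the commands below are illustrative diagnostics, not part of the recorded run), would be:

    sudo ss -ltnp | grep 8443                 # is any process bound to the apiserver port?
    curl -k https://localhost:8443/healthz    # kube-apiserver health endpoint; refused here

Both are standard tools (iproute2 ss, curl), and /healthz is a stock kube-apiserver endpoint.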
	I1208 01:54:46.059911 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:46.070737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:46.070825 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:46.110556 1136586 cri.go:89] found id: ""
	I1208 01:54:46.110583 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.110593 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:46.110600 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:46.110665 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:46.186917 1136586 cri.go:89] found id: ""
	I1208 01:54:46.186942 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.186951 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:46.186957 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:46.187021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:46.212604 1136586 cri.go:89] found id: ""
	I1208 01:54:46.212631 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.212639 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:46.212646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:46.212724 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:46.239989 1136586 cri.go:89] found id: ""
	I1208 01:54:46.240043 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.240054 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:46.240060 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:46.240217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:46.266799 1136586 cri.go:89] found id: ""
	I1208 01:54:46.266829 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.266839 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:46.266845 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:46.266918 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:46.294724 1136586 cri.go:89] found id: ""
	I1208 01:54:46.294753 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.294762 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:46.294769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:46.294829 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:46.320725 1136586 cri.go:89] found id: ""
	I1208 01:54:46.320754 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.320764 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:46.320771 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:46.320854 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:46.350768 1136586 cri.go:89] found id: ""
	I1208 01:54:46.350792 1136586 logs.go:282] 0 containers: []
	W1208 01:54:46.350801 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:46.350810 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:46.350822 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:46.416454 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:46.407778    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.408509    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410162    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410818    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.412543    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:46.407778    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.408509    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410162    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.410818    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:46.412543    2292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:46.416490 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:46.416510 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:46.442082 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:46.442115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:46.474546 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:46.474573 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:46.532104 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:46.532141 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:48.057590 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:54:48.120301 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1208 01:54:48.120337 1136586 retry.go:31] will retry after 17.544839516s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
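The stderr above names its own escape hatch: validation fails only because downloading the openapi schema requires the unreachable apiserver, and kubectl's documented --validate=false flag skips that round-trip. A hedged sketch of the manual retry, reusing the exact paths from the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml

Note this only sidesteps client-side validation; the apply itself would still fail until kube-apiserver accepts connections, which is why retry.go backs off (17.5s here) rather than disabling validation.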
	I1208 01:54:49.047527 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:49.058154 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:49.058224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:49.087906 1136586 cri.go:89] found id: ""
	I1208 01:54:49.087974 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.087999 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:49.088010 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:49.088086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:49.147486 1136586 cri.go:89] found id: ""
	I1208 01:54:49.147562 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.147585 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:49.147603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:49.147699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:49.190637 1136586 cri.go:89] found id: ""
	I1208 01:54:49.190712 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.190735 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:49.190755 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:49.190842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:49.222497 1136586 cri.go:89] found id: ""
	I1208 01:54:49.222525 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.222534 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:49.222549 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:49.222624 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:49.247026 1136586 cri.go:89] found id: ""
	I1208 01:54:49.247052 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.247061 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:49.247067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:49.247125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:49.275349 1136586 cri.go:89] found id: ""
	I1208 01:54:49.275378 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.275387 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:49.275394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:49.275499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:49.300792 1136586 cri.go:89] found id: ""
	I1208 01:54:49.300820 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.300829 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:49.300835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:49.300892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:49.325853 1136586 cri.go:89] found id: ""
	I1208 01:54:49.325882 1136586 logs.go:282] 0 containers: []
	W1208 01:54:49.325890 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:49.325900 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:49.325912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:49.384418 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:49.384468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:49.399275 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:49.399307 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:49.466718 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:49.458157    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.458602    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460310    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460773    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.462192    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:49.458157    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.458602    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460310    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.460773    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:49.462192    2417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:49.466785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:49.466814 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:49.491769 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:49.491803 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:52.023420 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:52.034753 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:52.034828 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:52.064923 1136586 cri.go:89] found id: ""
	I1208 01:54:52.064945 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.064953 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:52.064960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:52.065022 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:52.104945 1136586 cri.go:89] found id: ""
	I1208 01:54:52.104968 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.104977 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:52.104983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:52.105043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:52.171374 1136586 cri.go:89] found id: ""
	I1208 01:54:52.171395 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.171404 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:52.171410 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:52.171468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:52.201431 1136586 cri.go:89] found id: ""
	I1208 01:54:52.201476 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.201485 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:52.201492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:52.201563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:52.226892 1136586 cri.go:89] found id: ""
	I1208 01:54:52.226920 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.226929 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:52.226935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:52.227001 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:52.252811 1136586 cri.go:89] found id: ""
	I1208 01:54:52.252891 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.252914 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:52.252935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:52.253034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:52.282156 1136586 cri.go:89] found id: ""
	I1208 01:54:52.282179 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.282188 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:52.282195 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:52.282259 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:52.308580 1136586 cri.go:89] found id: ""
	I1208 01:54:52.308607 1136586 logs.go:282] 0 containers: []
	W1208 01:54:52.308618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:52.308628 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:52.308639 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:52.364992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:52.365028 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:52.379850 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:52.379877 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:52.445238 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:52.436912    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.437761    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439367    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439683    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.441222    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:52.436912    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.437761    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439367    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.439683    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:52.441222    2528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:52.445260 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:52.445273 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:52.471470 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:52.471505 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:55.003548 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:55.026046 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:55.026131 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:55.053887 1136586 cri.go:89] found id: ""
	I1208 01:54:55.053964 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.053989 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:55.054009 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:55.054101 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:55.088698 1136586 cri.go:89] found id: ""
	I1208 01:54:55.088724 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.088733 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:55.088760 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:55.088849 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:55.170740 1136586 cri.go:89] found id: ""
	I1208 01:54:55.170776 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.170785 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:55.170791 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:55.170899 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:55.197620 1136586 cri.go:89] found id: ""
	I1208 01:54:55.197656 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.197666 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:55.197690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:55.197776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:55.223553 1136586 cri.go:89] found id: ""
	I1208 01:54:55.223580 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.223589 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:55.223595 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:55.223680 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:55.248608 1136586 cri.go:89] found id: ""
	I1208 01:54:55.248677 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.248692 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:55.248699 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:55.248765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:55.274165 1136586 cri.go:89] found id: ""
	I1208 01:54:55.274232 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.274254 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:55.274272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:55.274361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:55.300558 1136586 cri.go:89] found id: ""
	I1208 01:54:55.300590 1136586 logs.go:282] 0 containers: []
	W1208 01:54:55.300600 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:55.300611 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:55.300622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:55.360386 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:55.360422 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:55.375869 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:55.375899 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:55.447970 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:55.439084    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.439796    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.441452    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.442051    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.443786    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:55.439084    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.439796    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.441452    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.442051    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:55.443786    2641 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:55.447993 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:55.448005 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:55.473774 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:55.473808 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:54:56.435194 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1208 01:54:56.498425 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:54:56.498545 1136586 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
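At this point the addon callback has failed and the warning is surfaced to the user (out.go) instead of being retried further. If the apiserver eventually comes up, the addon can be re-applied out of band; a minimal sketch, assuming the standard minikube CLI and that -p selects the profile under test (<profile> is a placeholder, not a value taken from this run):

    minikube addons enable storage-provisioner -p <profile>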
	I1208 01:54:58.006121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:54:58.018380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:54:58.018521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:54:58.045144 1136586 cri.go:89] found id: ""
	I1208 01:54:58.045180 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.045189 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:54:58.045211 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:54:58.045296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:54:58.071125 1136586 cri.go:89] found id: ""
	I1208 01:54:58.071151 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.071160 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:54:58.071167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:54:58.071226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:54:58.121465 1136586 cri.go:89] found id: ""
	I1208 01:54:58.121492 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.121511 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:54:58.121519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:54:58.121589 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:54:58.182249 1136586 cri.go:89] found id: ""
	I1208 01:54:58.182274 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.182282 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:54:58.182288 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:54:58.182350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:54:58.211355 1136586 cri.go:89] found id: ""
	I1208 01:54:58.211380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.211389 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:54:58.211395 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:54:58.211458 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:54:58.239234 1136586 cri.go:89] found id: ""
	I1208 01:54:58.239262 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.239271 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:54:58.239278 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:54:58.239338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:54:58.268137 1136586 cri.go:89] found id: ""
	I1208 01:54:58.268212 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.268227 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:54:58.268235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:54:58.268311 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:54:58.298356 1136586 cri.go:89] found id: ""
	I1208 01:54:58.298380 1136586 logs.go:282] 0 containers: []
	W1208 01:54:58.298389 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:54:58.298399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:54:58.298483 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:54:58.356947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:54:58.356983 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:54:58.371448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:54:58.371475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:54:58.435566 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:54:58.427538    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.428174    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.429739    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.430336    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.431872    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:54:58.427538    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.428174    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.429739    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.430336    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:54:58.431872    2761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:54:58.435589 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:54:58.435602 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:54:58.460122 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:54:58.460156 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:00.988330 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:00.999374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:00.999446 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:01.036571 1136586 cri.go:89] found id: ""
	I1208 01:55:01.036650 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.036687 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:01.036714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:01.036792 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:01.062231 1136586 cri.go:89] found id: ""
	I1208 01:55:01.062257 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.062267 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:01.062274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:01.062333 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:01.087570 1136586 cri.go:89] found id: ""
	I1208 01:55:01.087592 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.087601 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:01.087608 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:01.087668 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:01.137796 1136586 cri.go:89] found id: ""
	I1208 01:55:01.137822 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.137831 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:01.137838 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:01.137905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:01.193217 1136586 cri.go:89] found id: ""
	I1208 01:55:01.193240 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.193249 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:01.193256 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:01.193322 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:01.225114 1136586 cri.go:89] found id: ""
	I1208 01:55:01.225191 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.225217 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:01.225236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:01.225335 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:01.253406 1136586 cri.go:89] found id: ""
	I1208 01:55:01.253485 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.253510 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:01.253529 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:01.253641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:01.279950 1136586 cri.go:89] found id: ""
	I1208 01:55:01.280032 1136586 logs.go:282] 0 containers: []
	W1208 01:55:01.280058 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:01.280077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:01.280102 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:01.314699 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:01.314731 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:01.371902 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:01.371941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:01.387482 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:01.387511 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:01.454737 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:01.445966    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.446853    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448643    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.448979    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:01.450568    2887 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:01.454761 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:01.454775 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:03.982003 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:03.993616 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:03.993689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:04.022115 1136586 cri.go:89] found id: ""
	I1208 01:55:04.022143 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.022152 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:04.022162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:04.022228 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:04.052694 1136586 cri.go:89] found id: ""
	I1208 01:55:04.052720 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.052730 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:04.052737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:04.052799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:04.077702 1136586 cri.go:89] found id: ""
	I1208 01:55:04.077728 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.077737 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:04.077750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:04.077812 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:04.141633 1136586 cri.go:89] found id: ""
	I1208 01:55:04.141668 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.141677 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:04.141683 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:04.141753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:04.188894 1136586 cri.go:89] found id: ""
	I1208 01:55:04.188962 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.188976 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:04.188983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:04.189051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:04.218926 1136586 cri.go:89] found id: ""
	I1208 01:55:04.218951 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.218960 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:04.218966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:04.219028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:04.244759 1136586 cri.go:89] found id: ""
	I1208 01:55:04.244786 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.244795 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:04.244802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:04.244885 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:04.270311 1136586 cri.go:89] found id: ""
	I1208 01:55:04.270337 1136586 logs.go:282] 0 containers: []
	W1208 01:55:04.270346 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:04.270377 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:04.270396 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:04.298563 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:04.298594 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:04.357076 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:04.357110 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:04.372213 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:04.372255 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:04.437142 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:04.428336    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.429202    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.430905    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.431490    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:04.433182    3000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:04.437163 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:04.437176 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
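The cycle above — `crictl ps -a --quiet --name=<component>` for each control-plane component, every probe returning an empty ID list — is minikube's cri.go check for whether any matching container exists yet. A minimal local re-creation of that probe, as a sketch: it shells out to crictl directly rather than through minikube's SSH runner, and the findContainer helper name is hypothetical.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // findContainer mirrors the probe in the log: list all containers whose
    // name matches, one ID per line; empty output means "not found".
    func findContainer(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := findContainer(name)
    		switch {
    		case err != nil:
    			fmt.Printf("probe %q failed: %v\n", name, err)
    		case len(ids) == 0:
    			fmt.Printf("no container was found matching %q\n", name)
    		default:
    			fmt.Printf("%q -> %v\n", name, ids)
    		}
    	}
    }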
	I1208 01:55:05.665650 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1208 01:55:05.727737 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:05.727864 1136586 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
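The storageclass apply fails only because kubectl cannot download the OpenAPI schema from the unreachable API server, and minikube logs "apply failed, will retry". A rough sketch of that retry behaviour using the exact command and paths shown in the log; the attempt count and delay are assumptions, not minikube's actual schedule.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs the apply command from the log until it succeeds.
    // Assumption: 5 attempts, 3s apart.
    func applyWithRetry(manifest string) error {
    	var lastErr error
    	for attempt := 1; attempt <= 5; attempt++ {
    		out, err := exec.Command("sudo",
    			"KUBECONFIG=/var/lib/minikube/kubeconfig",
    			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
    			"apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("attempt %d: %v\n%s", attempt, err, out)
    		time.Sleep(3 * time.Second)
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }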
	I1208 01:55:06.963817 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:06.974536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:06.974639 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:06.999437 1136586 cri.go:89] found id: ""
	I1208 01:55:06.999466 1136586 logs.go:282] 0 containers: []
	W1208 01:55:06.999475 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:06.999481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:06.999540 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:07.029225 1136586 cri.go:89] found id: ""
	I1208 01:55:07.029253 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.029262 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:07.029274 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:07.029343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:07.058657 1136586 cri.go:89] found id: ""
	I1208 01:55:07.058683 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.058692 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:07.058698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:07.058757 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:07.090130 1136586 cri.go:89] found id: ""
	I1208 01:55:07.090158 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.090168 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:07.090175 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:07.090236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:07.139122 1136586 cri.go:89] found id: ""
	I1208 01:55:07.139177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.139187 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:07.139194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:07.139261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:07.172306 1136586 cri.go:89] found id: ""
	I1208 01:55:07.172328 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.172336 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:07.172343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:07.172400 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:07.204660 1136586 cri.go:89] found id: ""
	I1208 01:55:07.204689 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.204698 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:07.204705 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:07.204764 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:07.230319 1136586 cri.go:89] found id: ""
	I1208 01:55:07.230349 1136586 logs.go:282] 0 containers: []
	W1208 01:55:07.230358 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:07.230368 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:07.230380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:07.285979 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:07.286015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:07.301365 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:07.301391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:07.369069 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:07.360232    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.360985    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.362860    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.363322    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:07.364927    3106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:07.369140 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:07.369161 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:07.394018 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:07.394051 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
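Every `describe nodes` attempt above dies the same way: `dial tcp [::1]:8443: connect: connection refused`, i.e. nothing is listening on the apiserver port inside the node. The equivalent reachability check as a sketch; host and port come from the log, the timeout is chosen arbitrarily.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// "connection refused" on [::1]:8443 means there is no listener on
    	// the apiserver port; a plain TCP dial reproduces the check.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("kube-apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("kube-apiserver is accepting connections on localhost:8443")
    }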
	I1208 01:55:09.924985 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:09.935805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:09.935908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:09.962622 1136586 cri.go:89] found id: ""
	I1208 01:55:09.962647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.962656 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:09.962662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:09.962729 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:09.988243 1136586 cri.go:89] found id: ""
	I1208 01:55:09.988266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:09.988275 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:09.988283 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:09.988347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:10.019449 1136586 cri.go:89] found id: ""
	I1208 01:55:10.019482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.019492 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:10.019499 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:10.019570 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:10.048613 1136586 cri.go:89] found id: ""
	I1208 01:55:10.048637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.048646 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:10.048652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:10.048726 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:10.080915 1136586 cri.go:89] found id: ""
	I1208 01:55:10.080940 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.080949 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:10.080956 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:10.081021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:10.144352 1136586 cri.go:89] found id: ""
	I1208 01:55:10.144375 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.144384 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:10.144396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:10.144479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:10.182563 1136586 cri.go:89] found id: ""
	I1208 01:55:10.182586 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.182595 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:10.182601 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:10.182662 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:10.213649 1136586 cri.go:89] found id: ""
	I1208 01:55:10.213682 1136586 logs.go:282] 0 containers: []
	W1208 01:55:10.213694 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:10.213706 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:10.213724 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:10.242084 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:10.242114 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:10.298146 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:10.298181 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:10.313543 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:10.313574 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:10.380205 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:10.372256    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.372703    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374298    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.374669    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:10.376084    3231 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:10.380228 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:10.380248 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:12.905658 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:12.916576 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:12.916648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:12.944122 1136586 cri.go:89] found id: ""
	I1208 01:55:12.944146 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.944155 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:12.944161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:12.944222 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:12.969438 1136586 cri.go:89] found id: ""
	I1208 01:55:12.969464 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.969473 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:12.969481 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:12.969542 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:12.997359 1136586 cri.go:89] found id: ""
	I1208 01:55:12.997388 1136586 logs.go:282] 0 containers: []
	W1208 01:55:12.997397 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:12.997403 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:12.997470 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:13.025718 1136586 cri.go:89] found id: ""
	I1208 01:55:13.025746 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.025756 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:13.025763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:13.025823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:13.056865 1136586 cri.go:89] found id: ""
	I1208 01:55:13.056892 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.056902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:13.056908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:13.056969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:13.082432 1136586 cri.go:89] found id: ""
	I1208 01:55:13.082528 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.082546 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:13.082554 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:13.082626 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:13.125069 1136586 cri.go:89] found id: ""
	I1208 01:55:13.125144 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.125168 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:13.125187 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:13.125272 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:13.178384 1136586 cri.go:89] found id: ""
	I1208 01:55:13.178482 1136586 logs.go:282] 0 containers: []
	W1208 01:55:13.178507 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:13.178529 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:13.178567 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:13.239609 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:13.239644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:13.256212 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:13.256240 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:13.323842 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:13.315708    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.316122    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317629    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.317952    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:13.319386    3333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:13.323920 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:13.323949 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:13.348533 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:13.348570 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:15.879223 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:15.890243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:15.890364 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:15.914857 1136586 cri.go:89] found id: ""
	I1208 01:55:15.914886 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.914894 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:15.914901 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:15.914960 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:15.939097 1136586 cri.go:89] found id: ""
	I1208 01:55:15.939123 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.939134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:15.939140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:15.939201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:15.964064 1136586 cri.go:89] found id: ""
	I1208 01:55:15.964088 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.964097 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:15.964103 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:15.964167 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:15.989749 1136586 cri.go:89] found id: ""
	I1208 01:55:15.989789 1136586 logs.go:282] 0 containers: []
	W1208 01:55:15.989798 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:15.989805 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:15.989864 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:16.017523 1136586 cri.go:89] found id: ""
	I1208 01:55:16.017558 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.017567 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:16.017573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:16.017638 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:16.043968 1136586 cri.go:89] found id: ""
	I1208 01:55:16.043996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.044005 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:16.044012 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:16.044077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:16.068942 1136586 cri.go:89] found id: ""
	I1208 01:55:16.069012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.069038 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:16.069057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:16.069149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:16.110088 1136586 cri.go:89] found id: ""
	I1208 01:55:16.110117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:16.110127 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:16.110136 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:16.110147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:16.194161 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:16.194206 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:16.209083 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:16.209108 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:16.278327 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:16.269119    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.269607    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271240    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.271986    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:16.273746    3446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:16.278346 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:16.278361 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:16.304026 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:16.304059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:18.833542 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:18.844944 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:18.845029 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:18.871187 1136586 cri.go:89] found id: ""
	I1208 01:55:18.871210 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.871220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:18.871226 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:18.871287 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:18.899377 1136586 cri.go:89] found id: ""
	I1208 01:55:18.899399 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.899407 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:18.899413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:18.899473 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:18.924554 1136586 cri.go:89] found id: ""
	I1208 01:55:18.924578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.924587 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:18.924593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:18.924653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:18.949910 1136586 cri.go:89] found id: ""
	I1208 01:55:18.949932 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.949941 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:18.949947 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:18.950008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:18.974978 1136586 cri.go:89] found id: ""
	I1208 01:55:18.975001 1136586 logs.go:282] 0 containers: []
	W1208 01:55:18.975009 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:18.975015 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:18.975074 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:19.005380 1136586 cri.go:89] found id: ""
	I1208 01:55:19.005411 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.005421 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:19.005429 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:19.005503 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:19.032668 1136586 cri.go:89] found id: ""
	I1208 01:55:19.032750 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.032765 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:19.032780 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:19.032843 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:19.059531 1136586 cri.go:89] found id: ""
	I1208 01:55:19.059562 1136586 logs.go:282] 0 containers: []
	W1208 01:55:19.059572 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:19.059602 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:19.059619 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:19.121579 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:19.121613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:19.138076 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:19.138103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:19.222963 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:19.212805    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.213946    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.215722    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.216436    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:19.217965    3560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:19.222987 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:19.223000 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:19.253325 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:19.253368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
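Each "Gathering logs for kubelet/containerd ..." step shells out to journalctl for the last 400 lines of the unit, as the Run: lines show. A self-contained sketch of the journalctl variant, run locally instead of over SSH; the unitLogs helper name is hypothetical.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // unitLogs fetches the last n lines for a systemd unit, the same command
    // the "Gathering logs for containerd ..." step runs over SSH.
    func unitLogs(unit string, n int) (string, error) {
    	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
    	return string(out), err
    }

    func main() {
    	for _, unit := range []string{"kubelet", "containerd"} {
    		logs, err := unitLogs(unit, 400)
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "journalctl -u %s failed: %v\n", unit, err)
    			continue
    		}
    		fmt.Printf("=== %s (last 400 lines) ===\n%s", unit, logs)
    	}
    }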
	I1208 01:55:19.388285 1136586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1208 01:55:19.459805 1136586 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1208 01:55:19.459968 1136586 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1208 01:55:19.463177 1136586 out.go:179] * Enabled addons: 
	I1208 01:55:19.465938 1136586 addons.go:530] duration metric: took 1m45.131432136s for enable addons: enabled=[]
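With no addons enabled, the start loop keeps polling `sudo pgrep -xnf kube-apiserver.*minikube.*` and re-gathering logs until the API server process appears or the wait budget runs out. A sketch of that poll loop; the ~3 s interval is inferred from the timestamps above and the timeout is an assumption.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls for a running kube-apiserver process, as the
    // repeated pgrep lines in the log do. pgrep exits non-zero when nothing
    // matches, which exec reports as an error.
    func waitForAPIServer(timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return true
    		}
    		time.Sleep(3 * time.Second) // interval inferred from the timestamps
    	}
    	return false
    }

    func main() {
    	if waitForAPIServer(2 * time.Minute) {
    		fmt.Println("kube-apiserver process found")
    	} else {
    		fmt.Println("timed out waiting for kube-apiserver")
    	}
    }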
	I1208 01:55:21.781716 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:21.792431 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:21.792512 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:21.819119 1136586 cri.go:89] found id: ""
	I1208 01:55:21.819147 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.819157 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:21.819164 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:21.819230 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:21.848715 1136586 cri.go:89] found id: ""
	I1208 01:55:21.848751 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.848760 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:21.848767 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:21.848826 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:21.873926 1136586 cri.go:89] found id: ""
	I1208 01:55:21.873952 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.873961 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:21.873968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:21.874028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:21.900968 1136586 cri.go:89] found id: ""
	I1208 01:55:21.900995 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.901005 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:21.901011 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:21.901071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:21.929497 1136586 cri.go:89] found id: ""
	I1208 01:55:21.929524 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.929533 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:21.929540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:21.929600 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:21.954914 1136586 cri.go:89] found id: ""
	I1208 01:55:21.954936 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.954951 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:21.954959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:21.955020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:21.985551 1136586 cri.go:89] found id: ""
	I1208 01:55:21.985578 1136586 logs.go:282] 0 containers: []
	W1208 01:55:21.985586 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:21.985593 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:21.985656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:22.016148 1136586 cri.go:89] found id: ""
	I1208 01:55:22.016222 1136586 logs.go:282] 0 containers: []
	W1208 01:55:22.016244 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:22.016266 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:22.016305 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:22.049513 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:22.049585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:22.109605 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:22.109713 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:22.126061 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:22.126134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:22.225148 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:22.217274    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.217915    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.218929    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.219481    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:22.221120    3692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:22.225170 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:22.225183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
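The cycle above is minikube's log collector probing each expected control-plane container through the CRI: for every name it runs `sudo crictl ps -a --quiet --name=<name>` and treats empty output as "No container was found matching". A minimal Go sketch of that probe pattern follows, assuming a plain local exec wrapper; the real code in cri.go runs these commands over SSH via minikube's ssh_runner and differs in detail.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers mirrors the probe visible in the log above:
    //   sudo crictl ps -a --quiet --name=<name>
    // Empty output means "No container was found matching <name>".
    func listCRIContainers(name string) []string {
    	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, name := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	} {
    		ids := listCRIContainers(name)
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %v\n", name, ids)
    	}
    }

In this run every probe returns `found id: ""`, i.e. none of the control-plane containers ever came up.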
	I1208 01:55:24.750628 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:24.761806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:24.761883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:24.787831 1136586 cri.go:89] found id: ""
	I1208 01:55:24.787855 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.787864 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:24.787871 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:24.787931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:24.816489 1136586 cri.go:89] found id: ""
	I1208 01:55:24.816516 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.816526 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:24.816533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:24.816631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:24.843224 1136586 cri.go:89] found id: ""
	I1208 01:55:24.843247 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.843256 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:24.843262 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:24.843324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:24.869163 1136586 cri.go:89] found id: ""
	I1208 01:55:24.869186 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.869195 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:24.869202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:24.869261 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:24.896657 1136586 cri.go:89] found id: ""
	I1208 01:55:24.896685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.896695 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:24.896701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:24.896763 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:24.924888 1136586 cri.go:89] found id: ""
	I1208 01:55:24.924918 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.924927 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:24.924934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:24.924999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:24.951093 1136586 cri.go:89] found id: ""
	I1208 01:55:24.951117 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.951126 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:24.951133 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:24.951196 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:24.980609 1136586 cri.go:89] found id: ""
	I1208 01:55:24.980633 1136586 logs.go:282] 0 containers: []
	W1208 01:55:24.980642 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:24.980651 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:24.980662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:25.036369 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:25.036404 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:25.057565 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:25.057647 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:25.200105 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:25.189333    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.190129    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192138    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192915    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.194912    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:25.189333    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.190129    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192138    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.192915    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:25.194912    3789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:25.200136 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:25.200151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:25.227358 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:25.227398 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
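Between probe rounds the collector gathers four log sources; the shell commands appear verbatim in the lines above. A compact sketch of that pass, under the same local-exec assumption (minikube actually runs these inside the node over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings copied verbatim from the log-gathering pass above.
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"containerd", "sudo journalctl -u containerd -n 400"},
    		// Falls back to docker when crictl is not on PATH.
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, s := range sources {
    		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Printf("=== %s (err=%v) ===\n%s\n", s.name, err, out)
    	}
    }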
	I1208 01:55:27.756955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:27.767899 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:27.767972 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:27.795426 1136586 cri.go:89] found id: ""
	I1208 01:55:27.795451 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.795460 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:27.795466 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:27.795529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:27.821100 1136586 cri.go:89] found id: ""
	I1208 01:55:27.821127 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.821137 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:27.821143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:27.821213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:27.851486 1136586 cri.go:89] found id: ""
	I1208 01:55:27.851509 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.851518 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:27.851524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:27.851583 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:27.881644 1136586 cri.go:89] found id: ""
	I1208 01:55:27.881665 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.881673 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:27.881681 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:27.881739 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:27.911149 1136586 cri.go:89] found id: ""
	I1208 01:55:27.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.911185 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:27.911191 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:27.911296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:27.935972 1136586 cri.go:89] found id: ""
	I1208 01:55:27.936042 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.936069 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:27.936084 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:27.936158 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:27.961735 1136586 cri.go:89] found id: ""
	I1208 01:55:27.961762 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.961772 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:27.961778 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:27.961845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:27.987428 1136586 cri.go:89] found id: ""
	I1208 01:55:27.987452 1136586 logs.go:282] 0 containers: []
	W1208 01:55:27.987461 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:27.987471 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:27.987482 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:28.018603 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:28.018646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:28.051322 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:28.051395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:28.116115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:28.116154 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:28.140270 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:28.140297 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:28.224200 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:28.213883    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.214376    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218218    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218825    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.220332    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:28.213883    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.214376    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218218    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.218825    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:28.220332    3923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
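Every "describe nodes" attempt in this section fails the same way: kubectl exits with status 1 because nothing is listening on localhost:8443, which is consistent with the empty kube-apiserver probes above. A sketch of reproducing and capturing that failure, using the binary and kubeconfig paths exactly as they appear in the failing command:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Paths taken verbatim from the failing command in the log above.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes"+
    			" --kubeconfig=/var/lib/minikube/kubeconfig")
    	var stderr bytes.Buffer
    	cmd.Stderr = &stderr
    	if err := cmd.Run(); err != nil {
    		// With no apiserver on localhost:8443 this surfaces the same
    		// "connection refused" lines seen throughout the report.
    		log.Printf("failed describe nodes: %v\nstderr:\n%s", err, stderr.String())
    	}
    }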
	I1208 01:55:30.725898 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:30.736353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:30.736438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:30.764621 1136586 cri.go:89] found id: ""
	I1208 01:55:30.764647 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.764667 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:30.764691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:30.764772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:30.790477 1136586 cri.go:89] found id: ""
	I1208 01:55:30.790502 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.790510 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:30.790516 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:30.790577 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:30.816436 1136586 cri.go:89] found id: ""
	I1208 01:55:30.816522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.816539 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:30.816547 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:30.816625 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:30.845918 1136586 cri.go:89] found id: ""
	I1208 01:55:30.845944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.845953 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:30.845960 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:30.846020 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:30.870263 1136586 cri.go:89] found id: ""
	I1208 01:55:30.870307 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.870317 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:30.870323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:30.870388 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:30.896013 1136586 cri.go:89] found id: ""
	I1208 01:55:30.896041 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.896049 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:30.896057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:30.896174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:30.921585 1136586 cri.go:89] found id: ""
	I1208 01:55:30.921612 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.921621 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:30.921628 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:30.921689 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:30.951330 1136586 cri.go:89] found id: ""
	I1208 01:55:30.951355 1136586 logs.go:282] 0 containers: []
	W1208 01:55:30.951365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:30.951374 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:30.951391 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:30.977110 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:30.977151 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:31.009469 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:31.009525 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:31.071586 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:31.071635 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:31.087881 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:31.087927 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:31.188603 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:31.173005    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175001    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175960    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.177836    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.178524    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:31.173005    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175001    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.175960    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.177836    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:31.178524    4031 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:33.688896 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:33.699658 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:33.699730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:33.723918 1136586 cri.go:89] found id: ""
	I1208 01:55:33.723944 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.723952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:33.723959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:33.724017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:33.748249 1136586 cri.go:89] found id: ""
	I1208 01:55:33.748272 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.748281 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:33.748287 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:33.748361 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:33.774082 1136586 cri.go:89] found id: ""
	I1208 01:55:33.774165 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.774188 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:33.774208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:33.774300 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:33.804783 1136586 cri.go:89] found id: ""
	I1208 01:55:33.804808 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.804817 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:33.804824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:33.804883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:33.830537 1136586 cri.go:89] found id: ""
	I1208 01:55:33.830568 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.830578 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:33.830584 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:33.830645 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:33.855676 1136586 cri.go:89] found id: ""
	I1208 01:55:33.855702 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.855711 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:33.855719 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:33.855788 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:33.881829 1136586 cri.go:89] found id: ""
	I1208 01:55:33.881907 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.881943 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:33.881968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:33.882061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:33.911849 1136586 cri.go:89] found id: ""
	I1208 01:55:33.911872 1136586 logs.go:282] 0 containers: []
	W1208 01:55:33.911880 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:33.911925 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:33.911937 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:33.939161 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:33.939188 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:33.997922 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:33.997962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:34.019097 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:34.019129 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:34.086047 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:34.076333    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.077036    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.078821    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.079347    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.081184    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:34.076333    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.077036    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.078821    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.079347    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:34.081184    4143 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:34.086070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:34.086081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:36.616392 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:36.627074 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:36.627155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:36.655354 1136586 cri.go:89] found id: ""
	I1208 01:55:36.655378 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.655545 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:36.655552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:36.655616 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:36.684592 1136586 cri.go:89] found id: ""
	I1208 01:55:36.684615 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.684623 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:36.684629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:36.684693 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:36.715198 1136586 cri.go:89] found id: ""
	I1208 01:55:36.715224 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.715233 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:36.715240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:36.715304 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:36.744302 1136586 cri.go:89] found id: ""
	I1208 01:55:36.744327 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.744337 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:36.744343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:36.744405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:36.769612 1136586 cri.go:89] found id: ""
	I1208 01:55:36.769637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.769646 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:36.769652 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:36.769712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:36.796116 1136586 cri.go:89] found id: ""
	I1208 01:55:36.796138 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.796147 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:36.796153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:36.796212 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:36.824398 1136586 cri.go:89] found id: ""
	I1208 01:55:36.824424 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.824433 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:36.824439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:36.824543 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:36.849915 1136586 cri.go:89] found id: ""
	I1208 01:55:36.849942 1136586 logs.go:282] 0 containers: []
	W1208 01:55:36.849951 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:36.849960 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:36.849972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:36.904949 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:36.904986 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:36.919890 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:36.919919 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:36.983074 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:36.974477    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.975033    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.976856    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.977264    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.978951    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:36.974477    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.975033    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.976856    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.977264    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:36.978951    4249 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:36.983095 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:36.983111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:37.008505 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:37.008605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.548042 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:39.558613 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:39.558684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:39.582845 1136586 cri.go:89] found id: ""
	I1208 01:55:39.582870 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.582878 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:39.582885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:39.582946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:39.607991 1136586 cri.go:89] found id: ""
	I1208 01:55:39.608016 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.608025 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:39.608032 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:39.608094 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:39.633661 1136586 cri.go:89] found id: ""
	I1208 01:55:39.633685 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.633694 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:39.633701 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:39.633765 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:39.658962 1136586 cri.go:89] found id: ""
	I1208 01:55:39.658989 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.658998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:39.659005 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:39.659064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:39.684407 1136586 cri.go:89] found id: ""
	I1208 01:55:39.684490 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.684514 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:39.684534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:39.684622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:39.715084 1136586 cri.go:89] found id: ""
	I1208 01:55:39.715109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.715118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:39.715125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:39.715191 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:39.740328 1136586 cri.go:89] found id: ""
	I1208 01:55:39.740352 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.740361 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:39.740368 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:39.740457 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:39.771393 1136586 cri.go:89] found id: ""
	I1208 01:55:39.771420 1136586 logs.go:282] 0 containers: []
	W1208 01:55:39.771429 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:39.771438 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:39.771450 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:39.797255 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:39.797291 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:39.826926 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:39.826954 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:39.882889 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:39.882925 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:39.898019 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:39.898048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:39.963174 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:39.954059    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.954638    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.956325    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.957071    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.958660    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:39.954059    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.954638    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.956325    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.957071    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:39.958660    4373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
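The pgrep probes above land at 01:55:24, :27, :30, :33, :36, :39, :42, and :45, so the collector is retrying roughly every three seconds until the apiserver process appears or an overall deadline expires. A sketch of such a wait loop; the interval and timeout here are assumptions inferred from the timestamps, not minikube's actual constants.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls the same check the log shows:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // a matching process exists
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In this failed run the loop never succeeds: each round finds no process, re-enumerates the CRI containers, and re-gathers logs until the test's timeout is reached.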
	I1208 01:55:42.463393 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:42.473927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:42.474000 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:42.499722 1136586 cri.go:89] found id: ""
	I1208 01:55:42.499747 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.499757 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:42.499764 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:42.499842 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:42.525555 1136586 cri.go:89] found id: ""
	I1208 01:55:42.525637 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.525664 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:42.525671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:42.525745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:42.551105 1136586 cri.go:89] found id: ""
	I1208 01:55:42.551135 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.551144 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:42.551156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:42.551217 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:42.576427 1136586 cri.go:89] found id: ""
	I1208 01:55:42.576500 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.576515 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:42.576522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:42.576587 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:42.606069 1136586 cri.go:89] found id: ""
	I1208 01:55:42.606102 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.606111 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:42.606118 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:42.606190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:42.631166 1136586 cri.go:89] found id: ""
	I1208 01:55:42.631193 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.631202 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:42.631208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:42.631267 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:42.655160 1136586 cri.go:89] found id: ""
	I1208 01:55:42.655238 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.655255 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:42.655266 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:42.655329 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:42.680010 1136586 cri.go:89] found id: ""
	I1208 01:55:42.680085 1136586 logs.go:282] 0 containers: []
	W1208 01:55:42.680100 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:42.680111 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:42.680124 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:42.695151 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:42.695175 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:42.763022 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:55:42.754197    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.755084    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.756850    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.757467    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:42.759030    4473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:55:42.763046 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:42.763059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:42.788301 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:42.788337 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:42.823956 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:42.823981 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.380090 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:45.395413 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:45.395485 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:45.439897 1136586 cri.go:89] found id: ""
	I1208 01:55:45.439925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.439935 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:45.439942 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:45.440007 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:45.465988 1136586 cri.go:89] found id: ""
	I1208 01:55:45.466012 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.466020 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:45.466027 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:45.466099 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:45.491807 1136586 cri.go:89] found id: ""
	I1208 01:55:45.491834 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.491843 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:45.491850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:45.491913 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:45.516818 1136586 cri.go:89] found id: ""
	I1208 01:55:45.516843 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.516854 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:45.516861 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:45.516921 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:45.542497 1136586 cri.go:89] found id: ""
	I1208 01:55:45.542522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.542531 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:45.542538 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:45.542609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:45.568083 1136586 cri.go:89] found id: ""
	I1208 01:55:45.568109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.568118 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:45.568125 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:45.568183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:45.593517 1136586 cri.go:89] found id: ""
	I1208 01:55:45.593544 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.593554 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:45.593561 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:45.593674 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:45.618329 1136586 cri.go:89] found id: ""
	I1208 01:55:45.618356 1136586 logs.go:282] 0 containers: []
	W1208 01:55:45.618366 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
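The block above is one full probe cycle: a pgrep for a live kube-apiserver process, then one crictl query per control-plane component, all of which come back empty. The same checks can be repeated by hand inside the node (over `minikube ssh` or `docker exec`); this loop is a sketch that mirrors the logged commands, with the component list copied from them:

	# Is any kube-apiserver process alive at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Per component: any container, running or exited?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"
	done

An empty result for every component, as here, indicates the kubelet never created the static control-plane pods, which is why the log gathering that follows goes straight to the kubelet and containerd journals.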
	I1208 01:55:45.618375 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:45.618387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:45.682426 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:45.674188    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.674739    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676256    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.676719    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:45.678224    4584 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
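The describe-nodes failure is the same symptom seen from kubectl's side: the node-local kubeconfig points at https://localhost:8443 and nothing is listening there. Two quick manual checks (a sketch; the binary and kubeconfig paths are taken from the log, and curl availability in the node image is an assumption):

	# kubectl with the node's own kubeconfig, same binary the log invokes
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	# Probe the apiserver port directly; with no apiserver container the TCP connect is refused
	curl -ksS https://localhost:8443/healthz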
	I1208 01:55:45.682475 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:45.682489 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:45.708017 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:45.708054 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:45.737945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:45.737975 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:45.793795 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:45.793830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.309212 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:48.320148 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:48.320220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:48.367705 1136586 cri.go:89] found id: ""
	I1208 01:55:48.367730 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.367739 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:48.367745 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:48.367804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:48.421729 1136586 cri.go:89] found id: ""
	I1208 01:55:48.421754 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.421763 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:48.421769 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:48.421827 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:48.447771 1136586 cri.go:89] found id: ""
	I1208 01:55:48.447795 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.447804 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:48.447810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:48.447869 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:48.473161 1136586 cri.go:89] found id: ""
	I1208 01:55:48.473187 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.473196 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:48.473203 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:48.473265 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:48.498698 1136586 cri.go:89] found id: ""
	I1208 01:55:48.498723 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.498732 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:48.498738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:48.498798 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:48.527882 1136586 cri.go:89] found id: ""
	I1208 01:55:48.527908 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.527918 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:48.527925 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:48.528028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:48.554285 1136586 cri.go:89] found id: ""
	I1208 01:55:48.554311 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.554319 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:48.554326 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:48.554385 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:48.580502 1136586 cri.go:89] found id: ""
	I1208 01:55:48.580529 1136586 logs.go:282] 0 containers: []
	W1208 01:55:48.580538 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:48.580548 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:48.580580 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:48.610294 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:48.610319 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:48.665141 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:48.665179 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:48.682234 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:48.682262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:48.759351 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:48.750087    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.750965    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.751912    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.753542    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:48.754136    4712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:48.759375 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:48.759387 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:51.285923 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:51.298330 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:51.298405 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:51.324185 1136586 cri.go:89] found id: ""
	I1208 01:55:51.324212 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.324220 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:51.324227 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:51.324289 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:51.373377 1136586 cri.go:89] found id: ""
	I1208 01:55:51.373405 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.373414 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:51.373421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:51.373482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:51.433499 1136586 cri.go:89] found id: ""
	I1208 01:55:51.433522 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.433531 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:51.433537 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:51.433595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:51.458517 1136586 cri.go:89] found id: ""
	I1208 01:55:51.458543 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.458552 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:51.458558 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:51.458622 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:51.488348 1136586 cri.go:89] found id: ""
	I1208 01:55:51.488373 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.488382 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:51.488389 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:51.488471 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:51.513083 1136586 cri.go:89] found id: ""
	I1208 01:55:51.513109 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.513119 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:51.513126 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:51.513190 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:51.537741 1136586 cri.go:89] found id: ""
	I1208 01:55:51.537785 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.537804 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:51.537811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:51.537886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:51.563745 1136586 cri.go:89] found id: ""
	I1208 01:55:51.563769 1136586 logs.go:282] 0 containers: []
	W1208 01:55:51.563777 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:51.563786 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:51.563797 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:51.594103 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:51.594137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:51.650065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:51.650099 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:51.665199 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:51.665275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:51.732191 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:51.724269    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.724970    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726434    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.726795    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:51.728304    4823 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:51.732221 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:51.732235 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.259222 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:54.271505 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:54.271585 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:54.300828 1136586 cri.go:89] found id: ""
	I1208 01:55:54.300860 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.300869 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:54.300875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:54.300944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:54.326203 1136586 cri.go:89] found id: ""
	I1208 01:55:54.326235 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.326245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:54.326251 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:54.326319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:54.392508 1136586 cri.go:89] found id: ""
	I1208 01:55:54.392537 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.392557 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:54.392564 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:54.392631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:54.443370 1136586 cri.go:89] found id: ""
	I1208 01:55:54.443403 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.443413 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:54.443419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:54.443479 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:54.471931 1136586 cri.go:89] found id: ""
	I1208 01:55:54.471996 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.472011 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:54.472018 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:54.472080 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:54.497863 1136586 cri.go:89] found id: ""
	I1208 01:55:54.497888 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.497897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:54.497905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:54.497966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:54.522372 1136586 cri.go:89] found id: ""
	I1208 01:55:54.522398 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.522408 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:54.522415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:54.522500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:54.549239 1136586 cri.go:89] found id: ""
	I1208 01:55:54.549266 1136586 logs.go:282] 0 containers: []
	W1208 01:55:54.549275 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:54.549284 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:54.549316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:54.612864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:54.604382    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.605110    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.606733    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.607295    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:54.608865    4915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:55:54.612887 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:54.612900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:54.639721 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:54.639758 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:54.671819 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:54.671845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:54.734691 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:54.734736 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.251176 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:55:57.261934 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:55:57.262008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:55:57.287436 1136586 cri.go:89] found id: ""
	I1208 01:55:57.287460 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.287469 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:55:57.287476 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:55:57.287538 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:55:57.313930 1136586 cri.go:89] found id: ""
	I1208 01:55:57.313953 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.313962 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:55:57.313968 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:55:57.314028 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:55:57.340222 1136586 cri.go:89] found id: ""
	I1208 01:55:57.340245 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.340254 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:55:57.340260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:55:57.340321 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:55:57.380005 1136586 cri.go:89] found id: ""
	I1208 01:55:57.380028 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.380037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:55:57.380044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:55:57.380111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:55:57.421841 1136586 cri.go:89] found id: ""
	I1208 01:55:57.421863 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.421871 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:55:57.421877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:55:57.421935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:55:57.456549 1136586 cri.go:89] found id: ""
	I1208 01:55:57.456579 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.456588 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:55:57.456594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:55:57.456656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:55:57.480374 1136586 cri.go:89] found id: ""
	I1208 01:55:57.480472 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.480487 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:55:57.480494 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:55:57.480567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:55:57.504897 1136586 cri.go:89] found id: ""
	I1208 01:55:57.504925 1136586 logs.go:282] 0 containers: []
	W1208 01:55:57.504935 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:55:57.504944 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:55:57.504955 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:55:57.530334 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:55:57.530377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:55:57.561764 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:55:57.561791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:55:57.620753 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:55:57.620788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:55:57.636064 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:55:57.636155 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:55:57.701326 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:55:57.693243    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.694039    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695592    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.695921    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:55:57.697403    5047 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:00.203093 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:00.255847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:00.255935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:00.303978 1136586 cri.go:89] found id: ""
	I1208 01:56:00.304070 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.304095 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:00.304117 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:00.304214 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:00.413194 1136586 cri.go:89] found id: ""
	I1208 01:56:00.413283 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.413307 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:00.413328 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:00.413451 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:00.536345 1136586 cri.go:89] found id: ""
	I1208 01:56:00.536426 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.536462 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:00.536495 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:00.536582 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:00.570659 1136586 cri.go:89] found id: ""
	I1208 01:56:00.570746 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.570873 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:00.570915 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:00.571047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:00.600506 1136586 cri.go:89] found id: ""
	I1208 01:56:00.600542 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.600552 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:00.600559 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:00.600627 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:00.628998 1136586 cri.go:89] found id: ""
	I1208 01:56:00.629028 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.629037 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:00.629045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:00.629113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:00.655017 1136586 cri.go:89] found id: ""
	I1208 01:56:00.655055 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.655066 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:00.655073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:00.655136 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:00.687531 1136586 cri.go:89] found id: ""
	I1208 01:56:00.687555 1136586 logs.go:282] 0 containers: []
	W1208 01:56:00.687589 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:00.687601 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:00.687621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:00.716787 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:00.716826 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:00.773133 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:00.773171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:00.788167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:00.788194 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:00.851515 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:00.842694    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.843297    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.844838    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.845268    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:00.846892    5154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:00.851539 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:00.851553 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.378410 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:03.388811 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:03.388882 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:03.416483 1136586 cri.go:89] found id: ""
	I1208 01:56:03.416508 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.416517 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:03.416523 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:03.416584 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:03.444854 1136586 cri.go:89] found id: ""
	I1208 01:56:03.444879 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.444889 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:03.444896 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:03.444957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:03.471069 1136586 cri.go:89] found id: ""
	I1208 01:56:03.471096 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.471106 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:03.471113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:03.471174 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:03.497488 1136586 cri.go:89] found id: ""
	I1208 01:56:03.497516 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.497525 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:03.497532 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:03.497592 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:03.523459 1136586 cri.go:89] found id: ""
	I1208 01:56:03.523485 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.523494 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:03.523501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:03.523564 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:03.553004 1136586 cri.go:89] found id: ""
	I1208 01:56:03.553030 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.553038 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:03.553045 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:03.553104 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:03.582299 1136586 cri.go:89] found id: ""
	I1208 01:56:03.582325 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.582334 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:03.582340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:03.582398 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:03.628970 1136586 cri.go:89] found id: ""
	I1208 01:56:03.629036 1136586 logs.go:282] 0 containers: []
	W1208 01:56:03.629057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:03.629078 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:03.629116 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:03.693550 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:03.693861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:03.725106 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:03.725132 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:03.797949 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:03.789559    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.790067    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.791636    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.792114    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:03.793692    5254 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:03.797973 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:03.797985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:03.822975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:03.823012 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:06.351834 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:06.362738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:06.362832 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:06.388195 1136586 cri.go:89] found id: ""
	I1208 01:56:06.388222 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.388231 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:06.388238 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:06.388305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:06.413430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.413536 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.413559 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:06.413580 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:06.413657 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:06.438706 1136586 cri.go:89] found id: ""
	I1208 01:56:06.438770 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.438794 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:06.438813 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:06.438893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:06.463796 1136586 cri.go:89] found id: ""
	I1208 01:56:06.463860 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.463883 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:06.463902 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:06.463979 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:06.493653 1136586 cri.go:89] found id: ""
	I1208 01:56:06.493719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.493743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:06.493761 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:06.493839 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:06.518393 1136586 cri.go:89] found id: ""
	I1208 01:56:06.518490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.518516 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:06.518540 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:06.518628 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:06.547357 1136586 cri.go:89] found id: ""
	I1208 01:56:06.547423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.547444 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:06.547464 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:06.547537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:06.572430 1136586 cri.go:89] found id: ""
	I1208 01:56:06.572460 1136586 logs.go:282] 0 containers: []
	W1208 01:56:06.572469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:06.572479 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:06.572520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:06.631771 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:06.631805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:06.648910 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:06.648992 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:06.719373 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:06.710549    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.711634    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.712364    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.713608    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:06.714264    5365 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:06.719447 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:06.719474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:06.744508 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:06.744540 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
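The cycle above probes each control-plane component in turn with "crictl ps -a --quiet --name=<component>" and logs a warning when nothing matches. A minimal Go sketch of the same probe, assuming sudo and crictl are available on the node; the component list is taken from the log, everything else is illustrative rather than minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same command the log runs: list all containers, IDs only, filtered by name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		ids := strings.Fields(string(out))
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s)\n", name, len(ids))
    	}
    }

In the log, every one of these probes returns an empty ID list, which is why each component is reported as missing before the log-gathering step runs.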
	I1208 01:56:09.275604 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:09.286432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:09.286521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:09.312708 1136586 cri.go:89] found id: ""
	I1208 01:56:09.312733 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.312742 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:09.312749 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:09.312809 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:09.341427 1136586 cri.go:89] found id: ""
	I1208 01:56:09.341452 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.341461 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:09.341468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:09.341533 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:09.364765 1136586 cri.go:89] found id: ""
	I1208 01:56:09.364791 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.364801 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:09.364808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:09.364871 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:09.390922 1136586 cri.go:89] found id: ""
	I1208 01:56:09.390950 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.390959 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:09.390965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:09.391027 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:09.415255 1136586 cri.go:89] found id: ""
	I1208 01:56:09.415279 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.415288 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:09.415294 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:09.415351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:09.443874 1136586 cri.go:89] found id: ""
	I1208 01:56:09.443898 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.443907 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:09.443913 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:09.443973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:09.473821 1136586 cri.go:89] found id: ""
	I1208 01:56:09.473846 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.473855 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:09.473862 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:09.473920 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:09.502023 1136586 cri.go:89] found id: ""
	I1208 01:56:09.502048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:09.502057 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:09.502066 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:09.502077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:09.557585 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:09.557621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:09.572644 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:09.572673 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:09.660866 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:09.652629    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.653404    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.654983    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.655317    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.656808    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:09.652629    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.653404    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.654983    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.655317    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:09.656808    5480 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:09.660889 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:09.660902 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:09.687200 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:09.687238 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
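The timestamps show the whole diagnostic pass repeating roughly every three seconds, each pass starting with "sudo pgrep -xnf kube-apiserver.*minikube.*" to test whether an apiserver process has appeared. A sketch of that wait loop; the interval and deadline here are assumptions chosen to match the spacing of the log entries, not minikube's actual constants:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching kube-apiserver process exists,
    		// so a nil error means the apiserver is up.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process is up")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

In this run the pgrep never succeeds, so the loop keeps re-gathering kubelet, dmesg, containerd, and container-status logs on every iteration.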
	I1208 01:56:12.215648 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:12.227315 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:12.227391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:12.254343 1136586 cri.go:89] found id: ""
	I1208 01:56:12.254369 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.254378 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:12.254385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:12.254467 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:12.279481 1136586 cri.go:89] found id: ""
	I1208 01:56:12.279550 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.279574 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:12.279594 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:12.279683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:12.305844 1136586 cri.go:89] found id: ""
	I1208 01:56:12.305910 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.305933 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:12.305951 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:12.306041 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:12.330060 1136586 cri.go:89] found id: ""
	I1208 01:56:12.330139 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.330162 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:12.330181 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:12.330273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:12.356745 1136586 cri.go:89] found id: ""
	I1208 01:56:12.356813 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.356840 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:12.356858 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:12.356943 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:12.386368 1136586 cri.go:89] found id: ""
	I1208 01:56:12.386475 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.386492 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:12.386500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:12.386563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:12.412659 1136586 cri.go:89] found id: ""
	I1208 01:56:12.412685 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.412694 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:12.412700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:12.412779 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:12.440569 1136586 cri.go:89] found id: ""
	I1208 01:56:12.440596 1136586 logs.go:282] 0 containers: []
	W1208 01:56:12.440604 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:12.440615 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:12.440626 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:12.496637 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:12.496674 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:12.511594 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:12.511624 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:12.580748 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:12.572628    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.573299    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.574862    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.575300    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:12.576848    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:12.580771 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:12.580784 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:12.613723 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:12.613802 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
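Each "describe nodes" attempt fails the same way: kubectl's first request is API discovery against https://localhost:8443 (the server named in /var/lib/minikube/kubeconfig), and with no kube-apiserver container running the TCP dial is refused, producing the repeated memcache.go "connection refused" errors. The endpoint can be checked independently of kubectl; a sketch, with only the host and port taken from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the same endpoint kubectl is failing against.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("connection refused, matching the kubectl errors above:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("port 8443 is accepting connections")
    }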
	I1208 01:56:15.152673 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:15.163614 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:15.163688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:15.192414 1136586 cri.go:89] found id: ""
	I1208 01:56:15.192449 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.192458 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:15.192465 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:15.192537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:15.219157 1136586 cri.go:89] found id: ""
	I1208 01:56:15.219182 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.219191 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:15.219198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:15.219258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:15.244756 1136586 cri.go:89] found id: ""
	I1208 01:56:15.244824 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.244839 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:15.244846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:15.244907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:15.271473 1136586 cri.go:89] found id: ""
	I1208 01:56:15.271546 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.271562 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:15.271569 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:15.271637 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:15.297385 1136586 cri.go:89] found id: ""
	I1208 01:56:15.297411 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.297430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:15.297437 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:15.297506 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:15.323057 1136586 cri.go:89] found id: ""
	I1208 01:56:15.323127 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.323149 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:15.323158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:15.323226 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:15.348696 1136586 cri.go:89] found id: ""
	I1208 01:56:15.348771 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.348788 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:15.348795 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:15.348857 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:15.373461 1136586 cri.go:89] found id: ""
	I1208 01:56:15.373483 1136586 logs.go:282] 0 containers: []
	W1208 01:56:15.373491 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:15.373500 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:15.373512 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:15.403816 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:15.403845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:15.463833 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:15.463875 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:15.479494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:15.479522 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:15.551161 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:15.541601    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.542208    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544100    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.544802    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:15.546578    5723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:15.551185 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:15.551199 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.077116 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:18.087881 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:18.087956 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:18.116452 1136586 cri.go:89] found id: ""
	I1208 01:56:18.116480 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.116490 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:18.116497 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:18.116558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:18.147311 1136586 cri.go:89] found id: ""
	I1208 01:56:18.147339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.147347 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:18.147353 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:18.147415 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:18.173654 1136586 cri.go:89] found id: ""
	I1208 01:56:18.173680 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.173689 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:18.173695 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:18.173754 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:18.198118 1136586 cri.go:89] found id: ""
	I1208 01:56:18.198142 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.198151 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:18.198158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:18.198220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:18.229347 1136586 cri.go:89] found id: ""
	I1208 01:56:18.229371 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.229379 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:18.229385 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:18.229443 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:18.253505 1136586 cri.go:89] found id: ""
	I1208 01:56:18.253528 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.253536 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:18.253542 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:18.253601 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:18.279471 1136586 cri.go:89] found id: ""
	I1208 01:56:18.279496 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.279506 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:18.279513 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:18.279571 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:18.309796 1136586 cri.go:89] found id: ""
	I1208 01:56:18.309819 1136586 logs.go:282] 0 containers: []
	W1208 01:56:18.309827 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:18.309839 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:18.309850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:18.366744 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:18.366779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:18.381719 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:18.381749 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:18.448045 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:18.439257    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.440577    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.441122    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.442737    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:18.443195    5826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:18.448070 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:18.448082 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:18.473293 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:18.473332 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
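The "container status" step uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl, and fall back to docker only if crictl is missing or exits non-zero. The same fallback expressed in Go, as a sketch; the command names come from the log, the structure is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// First choice: crictl, as on the nodes this test provisions.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl absent or failing: fall back to docker, mirroring the shell "||".
    		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	}
    	if err != nil {
    		fmt.Println("neither crictl nor docker could list containers:", err)
    		return
    	}
    	fmt.Print(string(out))
    }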
	I1208 01:56:21.004404 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:21.017333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:21.017424 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:21.042756 1136586 cri.go:89] found id: ""
	I1208 01:56:21.042823 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.042839 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:21.042847 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:21.042907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:21.068017 1136586 cri.go:89] found id: ""
	I1208 01:56:21.068042 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.068051 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:21.068057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:21.068134 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:21.095695 1136586 cri.go:89] found id: ""
	I1208 01:56:21.095719 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.095729 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:21.095735 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:21.095833 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:21.126473 1136586 cri.go:89] found id: ""
	I1208 01:56:21.126499 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.126508 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:21.126515 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:21.126578 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:21.159320 1136586 cri.go:89] found id: ""
	I1208 01:56:21.159344 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.159354 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:21.159360 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:21.159421 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:21.189716 1136586 cri.go:89] found id: ""
	I1208 01:56:21.189740 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.189790 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:21.189808 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:21.189875 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:21.215065 1136586 cri.go:89] found id: ""
	I1208 01:56:21.215090 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.215099 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:21.215105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:21.215186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:21.239527 1136586 cri.go:89] found id: ""
	I1208 01:56:21.239551 1136586 logs.go:282] 0 containers: []
	W1208 01:56:21.239559 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:21.239568 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:21.239581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:21.303585 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:21.294718    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.295614    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297248    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.297562    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:21.299092    5932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:21.303607 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:21.303622 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:21.329232 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:21.329269 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:21.357399 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:21.357429 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:21.413905 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:21.413941 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:23.930606 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:23.941524 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:23.941609 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:23.969400 1136586 cri.go:89] found id: ""
	I1208 01:56:23.969431 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.969441 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:23.969447 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:23.969510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:23.999105 1136586 cri.go:89] found id: ""
	I1208 01:56:23.999131 1136586 logs.go:282] 0 containers: []
	W1208 01:56:23.999140 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:23.999147 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:23.999216 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:24.031489 1136586 cri.go:89] found id: ""
	I1208 01:56:24.031517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.031527 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:24.031533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:24.031598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:24.057876 1136586 cri.go:89] found id: ""
	I1208 01:56:24.057902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.057911 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:24.057917 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:24.057978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:24.092220 1136586 cri.go:89] found id: ""
	I1208 01:56:24.092247 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.092257 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:24.092263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:24.092324 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:24.125261 1136586 cri.go:89] found id: ""
	I1208 01:56:24.125289 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.125298 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:24.125306 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:24.125367 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:24.153744 1136586 cri.go:89] found id: ""
	I1208 01:56:24.153772 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.153782 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:24.153789 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:24.153852 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:24.179839 1136586 cri.go:89] found id: ""
	I1208 01:56:24.179866 1136586 logs.go:282] 0 containers: []
	W1208 01:56:24.179875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:24.179884 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:24.179916 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:24.237479 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:24.237514 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:24.252654 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:24.252693 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:24.325211 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:24.316319    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.317231    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319042    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.319691    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:24.321351    6050 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:24.325232 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:24.325244 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:24.351049 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:24.351084 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:26.879645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:26.891936 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:26.892009 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:26.916974 1136586 cri.go:89] found id: ""
	I1208 01:56:26.916998 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.917007 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:26.917013 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:26.917072 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:26.941861 1136586 cri.go:89] found id: ""
	I1208 01:56:26.941885 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.941894 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:26.941900 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:26.941963 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:26.974560 1136586 cri.go:89] found id: ""
	I1208 01:56:26.974587 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.974596 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:26.974602 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:26.974663 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:26.999892 1136586 cri.go:89] found id: ""
	I1208 01:56:26.999921 1136586 logs.go:282] 0 containers: []
	W1208 01:56:26.999930 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:26.999937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:27.000021 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:27.030397 1136586 cri.go:89] found id: ""
	I1208 01:56:27.030421 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.030430 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:27.030436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:27.030521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:27.059896 1136586 cri.go:89] found id: ""
	I1208 01:56:27.059923 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.059932 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:27.059941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:27.059999 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:27.084629 1136586 cri.go:89] found id: ""
	I1208 01:56:27.084656 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.084665 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:27.084671 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:27.084733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:27.119162 1136586 cri.go:89] found id: ""
	I1208 01:56:27.119185 1136586 logs.go:282] 0 containers: []
	W1208 01:56:27.119193 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:27.119202 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:27.119213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:27.179450 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:27.179487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:27.194459 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:27.194486 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:27.261775 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:56:27.253462    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.254126    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.255852    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.256341    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:27.257897    6163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:56:27.261797 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:27.261810 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:27.287303 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:27.287338 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:29.820302 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:29.830851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:29.830917 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:29.869685 1136586 cri.go:89] found id: ""
	I1208 01:56:29.869717 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.869726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:29.869733 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:29.869789 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:29.904021 1136586 cri.go:89] found id: ""
	I1208 01:56:29.904048 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.904057 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:29.904063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:29.904122 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:29.929826 1136586 cri.go:89] found id: ""
	I1208 01:56:29.929854 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.929864 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:29.929870 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:29.929935 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:29.954915 1136586 cri.go:89] found id: ""
	I1208 01:56:29.954939 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.954947 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:29.954954 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:29.955013 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:29.980194 1136586 cri.go:89] found id: ""
	I1208 01:56:29.980218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:29.980227 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:29.980233 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:29.980296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:30.034520 1136586 cri.go:89] found id: ""
	I1208 01:56:30.034556 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.034566 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:30.034573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:30.034648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:30.069395 1136586 cri.go:89] found id: ""
	I1208 01:56:30.069422 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.069432 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:30.069439 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:30.069507 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:30.109430 1136586 cri.go:89] found id: ""
	I1208 01:56:30.109459 1136586 logs.go:282] 0 containers: []
	W1208 01:56:30.109469 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:30.109479 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:30.109491 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:30.146595 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:30.146631 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:30.206376 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:30.206419 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:30.225510 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:30.225621 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:30.296464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:30.287753    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.288259    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290021    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.290422    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:30.291920    6285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:30.296484 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:30.296497 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
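The "container status" gather above relies on a shell fallback: "which crictl || echo crictl" still expands to a command name when crictl is absent, and the trailing "|| sudo docker ps -a" reports via Docker if the crictl invocation fails. A sketch of driving the same pipeline from Go, assuming /bin/bash and sudo exist on the node:

    // status_fallback.go: illustrative sketch of the fallback pipeline.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "which crictl || echo crictl" degrades gracefully when crictl
        // is missing; "|| sudo docker ps -a" then catches the failure.
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("no container runtime answered:", err)
        }
    }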
	I1208 01:56:32.823121 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:32.833454 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:32.833529 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:32.868695 1136586 cri.go:89] found id: ""
	I1208 01:56:32.868721 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.868740 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:32.868747 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:32.868821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:32.906232 1136586 cri.go:89] found id: ""
	I1208 01:56:32.906253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.906261 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:32.906267 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:32.906327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:32.932154 1136586 cri.go:89] found id: ""
	I1208 01:56:32.932181 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.932190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:32.932200 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:32.932262 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:32.957782 1136586 cri.go:89] found id: ""
	I1208 01:56:32.957805 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.957814 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:32.957821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:32.957886 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:32.983951 1136586 cri.go:89] found id: ""
	I1208 01:56:32.983978 1136586 logs.go:282] 0 containers: []
	W1208 01:56:32.983988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:32.983995 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:32.984057 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:33.011290 1136586 cri.go:89] found id: ""
	I1208 01:56:33.011316 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.011325 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:33.011340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:33.011410 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:33.038703 1136586 cri.go:89] found id: ""
	I1208 01:56:33.038726 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.038735 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:33.038741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:33.038799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:33.063041 1136586 cri.go:89] found id: ""
	I1208 01:56:33.063065 1136586 logs.go:282] 0 containers: []
	W1208 01:56:33.063074 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:33.063084 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:33.063115 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:33.078006 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:33.078036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:33.170567 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:33.159528    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.160460    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162035    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.162344    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:33.166573    6380 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:33.170591 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:33.170607 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:33.196077 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:33.196111 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:33.227121 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:33.227152 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
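Each "describe nodes" attempt fails identically: kubectl targets https://localhost:8443, nothing is listening there, and client-go retries API-group discovery several times (the repeated memcache.go lines) before printing the final refusal. A hypothetical probe that reproduces the condition without going through kubectl:

    // apiserver_probe.go: hypothetical probe, not part of the test suite.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Same condition kubectl reports above as "connection refused".
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }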
	I1208 01:56:35.783290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:35.793700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:35.793778 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:35.821903 1136586 cri.go:89] found id: ""
	I1208 01:56:35.821937 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.821946 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:35.821953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:35.822014 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:35.854878 1136586 cri.go:89] found id: ""
	I1208 01:56:35.854902 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.854910 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:35.854916 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:35.854978 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:35.881395 1136586 cri.go:89] found id: ""
	I1208 01:56:35.881418 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.881426 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:35.881432 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:35.881490 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:35.910658 1136586 cri.go:89] found id: ""
	I1208 01:56:35.910679 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.910688 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:35.910694 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:35.910753 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:35.939089 1136586 cri.go:89] found id: ""
	I1208 01:56:35.939114 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.939129 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:35.939137 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:35.939199 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:35.964135 1136586 cri.go:89] found id: ""
	I1208 01:56:35.964158 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.964166 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:35.964173 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:35.964235 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:35.990669 1136586 cri.go:89] found id: ""
	I1208 01:56:35.990692 1136586 logs.go:282] 0 containers: []
	W1208 01:56:35.990701 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:35.990707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:35.990770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:36.020165 1136586 cri.go:89] found id: ""
	I1208 01:56:36.020191 1136586 logs.go:282] 0 containers: []
	W1208 01:56:36.020207 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:36.020217 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:36.020228 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:36.076411 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:36.076452 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:36.093602 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:36.093683 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:36.181516 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:36.171688    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.172566    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.174406    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.175409    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:36.177007    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:36.181540 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:36.181552 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:36.207107 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:36.207142 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:38.735690 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:38.746691 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:38.746767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:38.773309 1136586 cri.go:89] found id: ""
	I1208 01:56:38.773339 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.773349 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:38.773356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:38.773423 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:38.801208 1136586 cri.go:89] found id: ""
	I1208 01:56:38.801235 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.801245 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:38.801254 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:38.801317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:38.826539 1136586 cri.go:89] found id: ""
	I1208 01:56:38.826566 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.826575 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:38.826582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:38.826642 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:38.865488 1136586 cri.go:89] found id: ""
	I1208 01:56:38.865517 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.865527 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:38.865533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:38.865594 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:38.900627 1136586 cri.go:89] found id: ""
	I1208 01:56:38.900655 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.900664 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:38.900670 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:38.900733 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:38.927847 1136586 cri.go:89] found id: ""
	I1208 01:56:38.927871 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.927880 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:38.927887 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:38.927949 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:38.952594 1136586 cri.go:89] found id: ""
	I1208 01:56:38.952666 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.952689 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:38.952714 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:38.952803 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:38.978089 1136586 cri.go:89] found id: ""
	I1208 01:56:38.978116 1136586 logs.go:282] 0 containers: []
	W1208 01:56:38.978125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:38.978134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:38.978147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:39.047378 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:39.038982    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.039639    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041190    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.041763    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:39.042893    6603 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:39.047401 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:39.047414 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:39.073359 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:39.073402 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:39.112761 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:39.112796 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:39.176177 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:39.176214 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
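The timestamps show the same cycle roughly every three seconds: pgrep for a kube-apiserver process and, while none exists, gather kubelet, dmesg, describe-nodes, containerd, and container-status output, then retry. A reduced sketch of that wait loop; the two-minute budget is an assumption, not minikube's configured timeout:

    // wait_apiserver.go: reduced sketch of the poll cycle above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            // pgrep exits non-zero when nothing matches the pattern.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            // Real runs gather diagnostics here; see the log lines above.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }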
	I1208 01:56:41.692238 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:41.702585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:41.702656 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:41.726879 1136586 cri.go:89] found id: ""
	I1208 01:56:41.726913 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.726923 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:41.726930 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:41.726996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:41.752119 1136586 cri.go:89] found id: ""
	I1208 01:56:41.752143 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.752152 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:41.752158 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:41.752215 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:41.777446 1136586 cri.go:89] found id: ""
	I1208 01:56:41.777473 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.777482 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:41.777488 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:41.777548 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:41.804077 1136586 cri.go:89] found id: ""
	I1208 01:56:41.804103 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.804112 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:41.804119 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:41.804179 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:41.828883 1136586 cri.go:89] found id: ""
	I1208 01:56:41.828908 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.828917 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:41.828924 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:41.828987 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:41.875100 1136586 cri.go:89] found id: ""
	I1208 01:56:41.875128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.875138 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:41.875145 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:41.875204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:41.907099 1136586 cri.go:89] found id: ""
	I1208 01:56:41.907126 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.907136 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:41.907142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:41.907201 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:41.936702 1136586 cri.go:89] found id: ""
	I1208 01:56:41.936729 1136586 logs.go:282] 0 containers: []
	W1208 01:56:41.936738 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:41.936748 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:41.936780 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:41.992993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:41.993029 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:42.008895 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:42.008988 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:42.090561 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:42.072542    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.073325    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.082968    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.083440    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:42.085181    6726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:42.090592 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:42.090605 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:42.127950 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:42.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
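The kubelet and containerd gathers are plain journalctl reads of a unit's last 400 lines. A small helper sketch; the name lastUnitLogs is hypothetical:

    // gather_logs.go: sketch of the journalctl tail reads above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func lastUnitLogs(unit string, lines int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo journalctl -u %s -n %d", unit, lines)).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            text, err := lastUnitLogs(unit, 400)
            if err != nil {
                fmt.Printf("%s: gather failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("=== last 400 lines of %s ===\n%s", unit, text)
        }
    }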
	I1208 01:56:44.678288 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:44.690356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:44.690429 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:44.716072 1136586 cri.go:89] found id: ""
	I1208 01:56:44.716095 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.716105 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:44.716111 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:44.716173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:44.742318 1136586 cri.go:89] found id: ""
	I1208 01:56:44.742347 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.742357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:44.742363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:44.742428 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:44.768786 1136586 cri.go:89] found id: ""
	I1208 01:56:44.768814 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.768824 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:44.768830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:44.768892 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:44.794997 1136586 cri.go:89] found id: ""
	I1208 01:56:44.795020 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.795028 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:44.795035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:44.795093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:44.824626 1136586 cri.go:89] found id: ""
	I1208 01:56:44.824693 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.824719 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:44.824738 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:44.824823 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:44.854631 1136586 cri.go:89] found id: ""
	I1208 01:56:44.854660 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.854682 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:44.854707 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:44.854790 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:44.886832 1136586 cri.go:89] found id: ""
	I1208 01:56:44.886853 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.886862 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:44.886868 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:44.886931 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:44.918383 1136586 cri.go:89] found id: ""
	I1208 01:56:44.918409 1136586 logs.go:282] 0 containers: []
	W1208 01:56:44.918420 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:44.918430 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:44.918441 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:44.974124 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:44.974160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:44.989499 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:44.989581 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:45.183353 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:45.161074    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.162567    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.163658    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.177046    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:45.178164    6840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:45.183384 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:45.183415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:45.225041 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:45.225130 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:47.776374 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:47.786874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:47.786944 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:47.817071 1136586 cri.go:89] found id: ""
	I1208 01:56:47.817097 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.817106 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:47.817113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:47.817173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:47.848935 1136586 cri.go:89] found id: ""
	I1208 01:56:47.848964 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.848972 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:47.848978 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:47.849039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:47.879145 1136586 cri.go:89] found id: ""
	I1208 01:56:47.879175 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.879190 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:47.879196 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:47.879255 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:47.919571 1136586 cri.go:89] found id: ""
	I1208 01:56:47.919595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.919605 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:47.919612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:47.919678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:47.945072 1136586 cri.go:89] found id: ""
	I1208 01:56:47.945098 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.945107 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:47.945113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:47.945176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:47.972399 1136586 cri.go:89] found id: ""
	I1208 01:56:47.972423 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.972432 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:47.972446 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:47.972513 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:47.998198 1136586 cri.go:89] found id: ""
	I1208 01:56:47.998225 1136586 logs.go:282] 0 containers: []
	W1208 01:56:47.998234 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:47.998240 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:47.998357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:48.026417 1136586 cri.go:89] found id: ""
	I1208 01:56:48.026469 1136586 logs.go:282] 0 containers: []
	W1208 01:56:48.026480 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:48.026514 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:48.026534 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:48.083726 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:48.083765 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:48.102473 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:48.102503 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:48.195413 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:48.186485    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.187327    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189193    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.189661    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:48.191269    6961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:48.195448 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:48.195461 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:48.222088 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:48.222125 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:50.752185 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:50.763217 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:50.763296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:50.792851 1136586 cri.go:89] found id: ""
	I1208 01:56:50.792877 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.792886 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:50.792893 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:50.792952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:50.818544 1136586 cri.go:89] found id: ""
	I1208 01:56:50.818573 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.818582 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:50.818590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:50.818653 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:50.856256 1136586 cri.go:89] found id: ""
	I1208 01:56:50.856286 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.856296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:50.856303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:50.856365 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:50.890254 1136586 cri.go:89] found id: ""
	I1208 01:56:50.890277 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.890286 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:50.890292 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:50.890351 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:50.919013 1136586 cri.go:89] found id: ""
	I1208 01:56:50.919039 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.919048 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:50.919054 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:50.919115 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:50.943865 1136586 cri.go:89] found id: ""
	I1208 01:56:50.943888 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.943897 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:50.943903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:50.943968 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:50.967885 1136586 cri.go:89] found id: ""
	I1208 01:56:50.967912 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.967921 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:50.967927 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:50.967984 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:50.997744 1136586 cri.go:89] found id: ""
	I1208 01:56:50.997779 1136586 logs.go:282] 0 containers: []
	W1208 01:56:50.997788 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:50.997854 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:50.997874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:51.066108 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:51.057282    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.058077    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.059667    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.060121    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:51.061658    7058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:51.066131 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:51.066144 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:51.092098 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:51.092134 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:51.129363 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:51.129392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:51.192049 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:51.192086 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:53.707235 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:53.718177 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:53.718245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:53.743649 1136586 cri.go:89] found id: ""
	I1208 01:56:53.743674 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.743684 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:53.743690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:53.743755 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:53.769475 1136586 cri.go:89] found id: ""
	I1208 01:56:53.769503 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.769512 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:53.769519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:53.769581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:53.795104 1136586 cri.go:89] found id: ""
	I1208 01:56:53.795128 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.795137 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:53.795143 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:53.795219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:53.824300 1136586 cri.go:89] found id: ""
	I1208 01:56:53.824322 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.824335 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:53.824342 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:53.824403 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:53.858957 1136586 cri.go:89] found id: ""
	I1208 01:56:53.858984 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.858993 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:53.858999 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:53.859059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:53.889936 1136586 cri.go:89] found id: ""
	I1208 01:56:53.889958 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.889967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:53.889974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:53.890042 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:53.917197 1136586 cri.go:89] found id: ""
	I1208 01:56:53.917221 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.917230 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:53.917236 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:53.917301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:53.944246 1136586 cri.go:89] found id: ""
	I1208 01:56:53.944313 1136586 logs.go:282] 0 containers: []
	W1208 01:56:53.944340 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
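Each polling pass enumerates the same fixed list of control-plane containers, one crictl call per name. A compact bash equivalent of the queries above (a sketch; the name list is taken from the Name: fields in the cri.go lines, and --quiet makes crictl print only container IDs, so empty output is exactly what produces the "No container was found" warnings):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
		ids=$(sudo crictl ps -a --quiet --name="$name")
		# empty output: the container was never created under /run/containerd/runc/k8s.io
		[ -z "$ids" ] && echo "no container matching $name"
	done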
	I1208 01:56:53.944364 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:53.944395 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:54.000224 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:54.000263 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:54.018576 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:54.018610 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:54.091957 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:54.080746    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.081281    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083079    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.083716    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:54.085255    7176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:56:54.092037 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:54.092064 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:54.121226 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:54.121262 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.665113 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:56.675727 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:56.675793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:56.702486 1136586 cri.go:89] found id: ""
	I1208 01:56:56.702512 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.702521 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:56.702536 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:56.702595 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:56.727464 1136586 cri.go:89] found id: ""
	I1208 01:56:56.727490 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.727499 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:56.727506 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:56.727574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:56.755210 1136586 cri.go:89] found id: ""
	I1208 01:56:56.755242 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.755252 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:56.755259 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:56.755317 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:56.780366 1136586 cri.go:89] found id: ""
	I1208 01:56:56.780394 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.780403 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:56.780409 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:56.780502 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:56.805514 1136586 cri.go:89] found id: ""
	I1208 01:56:56.805541 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.805551 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:56.805557 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:56.805615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:56.830960 1136586 cri.go:89] found id: ""
	I1208 01:56:56.830985 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.830994 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:56.831001 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:56.831067 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:56.877742 1136586 cri.go:89] found id: ""
	I1208 01:56:56.877812 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.877847 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:56.877873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:56.877969 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:56.909088 1136586 cri.go:89] found id: ""
	I1208 01:56:56.909173 1136586 logs.go:282] 0 containers: []
	W1208 01:56:56.909197 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:56.909218 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:56:56.909261 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:56:56.937087 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:56.937122 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:56.964566 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:56.964593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:57.025871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:57.025917 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:57.041167 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:57.041200 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:56:57.113620 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:56:57.102983    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.103546    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105231    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.105847    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:56:57.108853    7302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
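Every retry (roughly three seconds apart, going by the timestamps above) begins with the same process probe before falling back to the container queries. For the record, the pgrep flags mean: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID. In this run the probe evidently never finds a process, consistent with the empty crictl results that follow each time:

	# quoted here so the shell does not glob the pattern
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'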
	I1208 01:56:59.615300 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:56:59.625998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:56:59.626071 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:56:59.651013 1136586 cri.go:89] found id: ""
	I1208 01:56:59.651040 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.651050 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:56:59.651058 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:56:59.651140 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:56:59.676526 1136586 cri.go:89] found id: ""
	I1208 01:56:59.676595 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.676619 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:56:59.676632 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:56:59.676706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:56:59.705956 1136586 cri.go:89] found id: ""
	I1208 01:56:59.705982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.705992 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:56:59.705998 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:56:59.706058 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:56:59.732960 1136586 cri.go:89] found id: ""
	I1208 01:56:59.732988 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.732998 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:56:59.733004 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:56:59.733064 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:56:59.761227 1136586 cri.go:89] found id: ""
	I1208 01:56:59.761253 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.761262 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:56:59.761268 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:56:59.761332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:56:59.795189 1136586 cri.go:89] found id: ""
	I1208 01:56:59.795218 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.795227 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:56:59.795235 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:56:59.795296 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:56:59.820209 1136586 cri.go:89] found id: ""
	I1208 01:56:59.820278 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.820303 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:56:59.820317 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:56:59.820397 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:56:59.854906 1136586 cri.go:89] found id: ""
	I1208 01:56:59.854982 1136586 logs.go:282] 0 containers: []
	W1208 01:56:59.855003 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:56:59.855031 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:56:59.855075 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:56:59.895804 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:56:59.895880 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:56:59.953038 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:56:59.953076 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:56:59.968348 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:56:59.968383 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:00.183275 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:00.153410    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.154498    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.155550    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.156552    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:00.157518    7415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:00.183303 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:00.183318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:02.767941 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:02.778692 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:02.778767 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:02.804099 1136586 cri.go:89] found id: ""
	I1208 01:57:02.804168 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.804192 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:02.804207 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:02.804282 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:02.829415 1136586 cri.go:89] found id: ""
	I1208 01:57:02.829442 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.829451 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:02.829456 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:02.829516 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:02.876418 1136586 cri.go:89] found id: ""
	I1208 01:57:02.876448 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.876456 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:02.876462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:02.876521 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:02.908999 1136586 cri.go:89] found id: ""
	I1208 01:57:02.909021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.909030 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:02.909036 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:02.909095 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:02.935740 1136586 cri.go:89] found id: ""
	I1208 01:57:02.935763 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.935772 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:02.935781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:02.935845 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:02.962615 1136586 cri.go:89] found id: ""
	I1208 01:57:02.962640 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.962649 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:02.962676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:02.962762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:02.988338 1136586 cri.go:89] found id: ""
	I1208 01:57:02.988413 1136586 logs.go:282] 0 containers: []
	W1208 01:57:02.988447 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:02.988469 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:02.988563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:03.016087 1136586 cri.go:89] found id: ""
	I1208 01:57:03.016115 1136586 logs.go:282] 0 containers: []
	W1208 01:57:03.016125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:03.016135 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:03.016147 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:03.045768 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:03.045798 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:03.103820 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:03.103856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:03.119506 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:03.119544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:03.188553 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:03.180378    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.180829    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182530    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.182890    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:03.184520    7527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:03.188577 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:03.188591 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.714622 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:05.728070 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:05.728144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:05.752683 1136586 cri.go:89] found id: ""
	I1208 01:57:05.752709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.752718 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:05.752725 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:05.752804 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:05.777888 1136586 cri.go:89] found id: ""
	I1208 01:57:05.777926 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.777935 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:05.777941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:05.778004 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:05.803200 1136586 cri.go:89] found id: ""
	I1208 01:57:05.803227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.803236 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:05.803243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:05.803305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:05.828694 1136586 cri.go:89] found id: ""
	I1208 01:57:05.828719 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.828728 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:05.828734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:05.828795 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:05.871706 1136586 cri.go:89] found id: ""
	I1208 01:57:05.871734 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.871743 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:05.871750 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:05.871810 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:05.910109 1136586 cri.go:89] found id: ""
	I1208 01:57:05.910130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.910139 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:05.910146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:05.910211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:05.935420 1136586 cri.go:89] found id: ""
	I1208 01:57:05.935446 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.935455 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:05.935463 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:05.935524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:05.964805 1136586 cri.go:89] found id: ""
	I1208 01:57:05.964830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:05.964840 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:05.964850 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:05.964861 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:05.991812 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:05.991850 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:06.023289 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:06.023318 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:06.079947 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:06.079984 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:06.094973 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:06.095001 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:06.164494 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:06.154632    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.155375    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.157475    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.158920    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:06.159484    7643 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:08.664783 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:08.675873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:08.675951 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:08.701544 1136586 cri.go:89] found id: ""
	I1208 01:57:08.701570 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.701579 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:08.701585 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:08.701644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:08.726739 1136586 cri.go:89] found id: ""
	I1208 01:57:08.726761 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.726770 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:08.726777 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:08.726834 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:08.752551 1136586 cri.go:89] found id: ""
	I1208 01:57:08.752579 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.752590 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:08.752596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:08.752661 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:08.785394 1136586 cri.go:89] found id: ""
	I1208 01:57:08.785418 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.785427 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:08.785434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:08.785494 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:08.809379 1136586 cri.go:89] found id: ""
	I1208 01:57:08.809411 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.809420 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:08.809426 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:08.809493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:08.834793 1136586 cri.go:89] found id: ""
	I1208 01:57:08.834820 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.834829 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:08.834836 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:08.834895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:08.871040 1136586 cri.go:89] found id: ""
	I1208 01:57:08.871067 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.871077 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:08.871083 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:08.871149 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:08.898916 1136586 cri.go:89] found id: ""
	I1208 01:57:08.898943 1136586 logs.go:282] 0 containers: []
	W1208 01:57:08.898953 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:08.898961 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:08.898973 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:08.958751 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:08.958791 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:08.975804 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:08.975842 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:09.045728 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:09.036794    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.037578    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039382    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.039918    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:09.041609    7741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:09.045754 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:09.045768 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:09.071802 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:09.071844 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:11.602631 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:11.621366 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:11.621447 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:11.654343 1136586 cri.go:89] found id: ""
	I1208 01:57:11.654378 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.654387 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:11.654396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:11.654496 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:11.687384 1136586 cri.go:89] found id: ""
	I1208 01:57:11.687421 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.687431 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:11.687444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:11.687515 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:11.716671 1136586 cri.go:89] found id: ""
	I1208 01:57:11.716709 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.716720 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:11.716726 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:11.716796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:11.742357 1136586 cri.go:89] found id: ""
	I1208 01:57:11.742391 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.742400 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:11.742407 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:11.742493 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:11.768963 1136586 cri.go:89] found id: ""
	I1208 01:57:11.768990 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.768999 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:11.769006 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:11.769075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:11.793322 1136586 cri.go:89] found id: ""
	I1208 01:57:11.793354 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.793364 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:11.793371 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:11.793438 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:11.819428 1136586 cri.go:89] found id: ""
	I1208 01:57:11.819473 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.819483 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:11.819490 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:11.819561 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:11.856579 1136586 cri.go:89] found id: ""
	I1208 01:57:11.856620 1136586 logs.go:282] 0 containers: []
	W1208 01:57:11.856629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:11.856639 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:11.856650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:11.920066 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:11.920104 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:11.936490 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:11.936579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:12.003301 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:11.992791    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.993553    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995210    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.995907    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:11.997606    7854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:12.003353 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:12.003368 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:12.034123 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:12.034162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:14.566675 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:14.577850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:14.577926 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:14.614645 1136586 cri.go:89] found id: ""
	I1208 01:57:14.614674 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.614683 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:14.614689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:14.614746 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:14.653668 1136586 cri.go:89] found id: ""
	I1208 01:57:14.653689 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.653698 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:14.653704 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:14.653760 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:14.683123 1136586 cri.go:89] found id: ""
	I1208 01:57:14.683147 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.683155 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:14.683162 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:14.683220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:14.712290 1136586 cri.go:89] found id: ""
	I1208 01:57:14.712317 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.712326 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:14.712333 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:14.712411 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:14.741728 1136586 cri.go:89] found id: ""
	I1208 01:57:14.741752 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.741761 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:14.741768 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:14.741830 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:14.766640 1136586 cri.go:89] found id: ""
	I1208 01:57:14.766675 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.766684 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:14.766690 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:14.766749 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:14.795809 1136586 cri.go:89] found id: ""
	I1208 01:57:14.795833 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.795843 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:14.795850 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:14.795908 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:14.824523 1136586 cri.go:89] found id: ""
	I1208 01:57:14.824546 1136586 logs.go:282] 0 containers: []
	W1208 01:57:14.824555 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:14.824564 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:14.824579 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:14.883992 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:14.884032 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:14.899927 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:14.899958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:14.971584 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:14.962953    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.963354    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965054    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.965873    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:14.967129    7965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
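Every describe-nodes attempt in this run fails the same way: kubectl cannot even fetch the API group list because nothing accepts connections on [::1]:8443. A cheaper probe than running kubectl is to dial the socket directly; "connection refused" from the dial is exactly the condition the stderr above reports. A sketch (address hardcoded for illustration):

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverListening reports whether anything accepts TCP connections on the
// apiserver port. A refused connection, as in the log above, means the port
// is closed, so running kubectl would only reproduce the same failure.
func apiserverListening(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(apiserverListening("localhost:8443")) // false while the apiserver is down
}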
	I1208 01:57:14.971605 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:14.971618 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:14.997478 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:14.997516 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
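The "container status" command above is a shell fallback chain: resolve crictl's path (falling back to the bare name if `which` finds nothing), and if the whole crictl invocation fails, try `docker ps -a` instead. The same fallback expressed directly in Go (a sketch; both tools are assumed to need sudo as in the log):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker when crictl is
// missing or errors, mirroring the `... || sudo docker ps -a` chain above.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker produced a listing:", err)
		return
	}
	fmt.Printf("%s", out)
}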
	I1208 01:57:17.562433 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:17.573169 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:17.573243 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:17.604838 1136586 cri.go:89] found id: ""
	I1208 01:57:17.604866 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.604879 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:17.604885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:17.604945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:17.651166 1136586 cri.go:89] found id: ""
	I1208 01:57:17.651193 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.651202 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:17.651208 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:17.651275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:17.679266 1136586 cri.go:89] found id: ""
	I1208 01:57:17.679302 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.679312 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:17.679318 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:17.679379 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:17.703476 1136586 cri.go:89] found id: ""
	I1208 01:57:17.703504 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.703513 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:17.703519 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:17.703579 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:17.732349 1136586 cri.go:89] found id: ""
	I1208 01:57:17.732377 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.732386 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:17.732393 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:17.732461 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:17.761008 1136586 cri.go:89] found id: ""
	I1208 01:57:17.761033 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.761042 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:17.761053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:17.761112 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:17.789502 1136586 cri.go:89] found id: ""
	I1208 01:57:17.789527 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.789536 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:17.789543 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:17.789599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:17.814915 1136586 cri.go:89] found id: ""
	I1208 01:57:17.814938 1136586 logs.go:282] 0 containers: []
	W1208 01:57:17.814947 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:17.814958 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:17.814971 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:17.901464 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:17.890645    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.891350    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893042    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.893390    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:17.894876    8072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:17.901483 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:17.901496 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:17.927699 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:17.927737 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:17.956480 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:17.956506 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:18.016061 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:18.016103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
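The timestamps show the whole cycle repeating roughly every three seconds: probe for the apiserver, list containers, gather logs, try describe-nodes again. A sketch of such a poll loop with an overall deadline (interval and timeout are illustrative; the real run retried for minutes before the test gave up):

package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil retries check at the given interval until it succeeds or the
// context deadline passes -- the cadence visible in the timestamps above.
func pollUntil(ctx context.Context, interval time.Duration, check func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	err := pollUntil(ctx, 3*time.Second, func() bool { return false /* apiserver never comes up */ })
	fmt.Println(err) // context deadline exceeded: the shape of this test's failure
}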
	I1208 01:57:20.532462 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:20.543127 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:20.543203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:20.568124 1136586 cri.go:89] found id: ""
	I1208 01:57:20.568149 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.568158 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:20.568167 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:20.568227 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:20.603985 1136586 cri.go:89] found id: ""
	I1208 01:57:20.604021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.604030 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:20.604037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:20.604106 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:20.636556 1136586 cri.go:89] found id: ""
	I1208 01:57:20.636588 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.636597 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:20.636603 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:20.636671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:20.672751 1136586 cri.go:89] found id: ""
	I1208 01:57:20.672825 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.672860 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:20.672885 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:20.672980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:20.701486 1136586 cri.go:89] found id: ""
	I1208 01:57:20.701557 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.701593 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:20.701617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:20.701708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:20.727838 1136586 cri.go:89] found id: ""
	I1208 01:57:20.727863 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.727873 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:20.727897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:20.727958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:20.757101 1136586 cri.go:89] found id: ""
	I1208 01:57:20.757126 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.757135 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:20.757142 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:20.757204 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:20.786936 1136586 cri.go:89] found id: ""
	I1208 01:57:20.786961 1136586 logs.go:282] 0 containers: []
	W1208 01:57:20.786970 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:20.786981 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:20.786995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:20.801478 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:20.801508 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:20.873983 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:20.862883    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.865869    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.866497    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868082    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:20.868569    8185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:20.874054 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:20.874087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:20.901450 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:20.901529 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:20.934263 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:20.934288 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.489851 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
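The pgrep probe just above uses -f (match the full command line), -x (the pattern must match the whole command line), and -n (keep only the newest PID). An approximation of that check in Go by scanning /proc directly, Linux-only and without the newest-PID selection (a sketch; the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

// findApiserverPIDs approximates `pgrep -xnf kube-apiserver.*minikube.*`:
// read each /proc/<pid>/cmdline (NUL-separated arguments) and keep PIDs whose
// full, anchored command line matches the pattern.
func findApiserverPIDs() []string {
	re := regexp.MustCompile(`^kube-apiserver.*minikube.*$`)
	var pids []string
	dirs, _ := filepath.Glob("/proc/[0-9]*")
	for _, d := range dirs {
		raw, err := os.ReadFile(filepath.Join(d, "cmdline"))
		if err != nil {
			continue
		}
		cmdline := strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " ")
		if re.MatchString(cmdline) {
			pids = append(pids, filepath.Base(d))
		}
	}
	return pids
}

func main() {
	fmt.Println(findApiserverPIDs()) // empty here: no apiserver process exists
}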
	I1208 01:57:23.500424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:23.500500 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:23.526190 1136586 cri.go:89] found id: ""
	I1208 01:57:23.526216 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.526225 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:23.526232 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:23.526294 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:23.552764 1136586 cri.go:89] found id: ""
	I1208 01:57:23.552790 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.552799 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:23.552806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:23.552868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:23.577380 1136586 cri.go:89] found id: ""
	I1208 01:57:23.577406 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.577414 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:23.577421 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:23.577481 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:23.608802 1136586 cri.go:89] found id: ""
	I1208 01:57:23.608830 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.608839 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:23.608846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:23.608910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:23.634994 1136586 cri.go:89] found id: ""
	I1208 01:57:23.635020 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.635029 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:23.635035 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:23.635096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:23.663236 1136586 cri.go:89] found id: ""
	I1208 01:57:23.663261 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.663270 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:23.663277 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:23.663350 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:23.688872 1136586 cri.go:89] found id: ""
	I1208 01:57:23.688898 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.688907 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:23.688914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:23.688973 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:23.714286 1136586 cri.go:89] found id: ""
	I1208 01:57:23.714312 1136586 logs.go:282] 0 containers: []
	W1208 01:57:23.714320 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:23.714329 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:23.714345 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:23.742945 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:23.742972 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:23.798260 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:23.798300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:23.813312 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:23.813340 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:23.892723 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:23.883927    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.884764    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886336    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.886667    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:23.888696    8309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:23.892748 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:23.892762 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
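The paired `found id: ""` and `0 containers: []` lines show how the crictl output is consumed: `--quiet` emits one container ID per line, and an empty result parses to zero containers. A sketch of that parsing step (helper name illustrative):

package main

import (
	"fmt"
	"strings"
)

// parseIDs turns `crictl ps --quiet` output into a slice of container IDs.
// Empty or whitespace-only output yields no IDs, which is what the
// `0 containers: []` lines in the log report.
func parseIDs(out string) []string {
	var ids []string
	for _, line := range strings.Split(out, "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids
}

func main() {
	fmt.Println(len(parseIDs("")))            // 0, the failing case above
	fmt.Println(parseIDs("abc123\ndef456\n")) // [abc123 def456]
}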
	I1208 01:57:26.422664 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:26.433380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:26.433455 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:26.465015 1136586 cri.go:89] found id: ""
	I1208 01:57:26.465039 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.465048 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:26.465055 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:26.465113 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:26.493403 1136586 cri.go:89] found id: ""
	I1208 01:57:26.493429 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.493438 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:26.493449 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:26.493537 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:26.519773 1136586 cri.go:89] found id: ""
	I1208 01:57:26.519799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.519814 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:26.519821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:26.519883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:26.548992 1136586 cri.go:89] found id: ""
	I1208 01:57:26.549025 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.549037 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:26.549047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:26.549127 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:26.574005 1136586 cri.go:89] found id: ""
	I1208 01:57:26.574031 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.574041 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:26.574047 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:26.574111 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:26.609416 1136586 cri.go:89] found id: ""
	I1208 01:57:26.609443 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.609452 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:26.609459 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:26.609517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:26.640996 1136586 cri.go:89] found id: ""
	I1208 01:57:26.641021 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.641031 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:26.641037 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:26.641096 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:26.667832 1136586 cri.go:89] found id: ""
	I1208 01:57:26.667861 1136586 logs.go:282] 0 containers: []
	W1208 01:57:26.667870 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:26.667880 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:26.667911 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:26.727920 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:26.727958 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:26.743134 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:26.743167 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:26.805654 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:26.797405    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.798207    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.799707    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.800178    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:26.801717    8412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:26.805676 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:26.805689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:26.833117 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:26.833153 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.374479 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:29.385263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:29.385343 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:29.411850 1136586 cri.go:89] found id: ""
	I1208 01:57:29.411881 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.411890 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:29.411897 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:29.411957 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:29.436577 1136586 cri.go:89] found id: ""
	I1208 01:57:29.436650 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.436667 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:29.436674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:29.436741 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:29.461265 1136586 cri.go:89] found id: ""
	I1208 01:57:29.461287 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.461296 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:29.461302 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:29.461375 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:29.485998 1136586 cri.go:89] found id: ""
	I1208 01:57:29.486024 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.486033 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:29.486039 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:29.486102 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:29.515456 1136586 cri.go:89] found id: ""
	I1208 01:57:29.515482 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.515491 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:29.515498 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:29.515574 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:29.540631 1136586 cri.go:89] found id: ""
	I1208 01:57:29.540658 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.540667 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:29.540674 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:29.540771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:29.569112 1136586 cri.go:89] found id: ""
	I1208 01:57:29.569156 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.569182 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:29.569194 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:29.569276 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:29.601158 1136586 cri.go:89] found id: ""
	I1208 01:57:29.601182 1136586 logs.go:282] 0 containers: []
	W1208 01:57:29.601192 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:29.601201 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:29.601213 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:29.681907 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:29.673858    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.674481    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676004    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.676507    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:29.677918    8518 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:29.681933 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:29.681946 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:29.707746 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:29.707781 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:29.740008 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:29.740036 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:29.795859 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:29.795893 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
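The failed-describe-nodes dumps print stdout and stderr as separate sections, which means the command's two streams are captured independently rather than combined. A sketch of that capture around the same kubectl invocation (paths copied from the log; not minikube's actual ssh_runner code):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log's "describe nodes" step; stdout and stderr
	// are buffered separately so a failure report can print both, as above.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}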
	I1208 01:57:32.311192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:32.322374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:32.322487 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:32.352628 1136586 cri.go:89] found id: ""
	I1208 01:57:32.352653 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.352662 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:32.352668 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:32.352727 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:32.379283 1136586 cri.go:89] found id: ""
	I1208 01:57:32.379308 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.379317 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:32.379323 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:32.379383 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:32.405884 1136586 cri.go:89] found id: ""
	I1208 01:57:32.405911 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.405919 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:32.405926 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:32.405985 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:32.431914 1136586 cri.go:89] found id: ""
	I1208 01:57:32.431939 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.431948 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:32.431958 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:32.432019 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:32.456763 1136586 cri.go:89] found id: ""
	I1208 01:57:32.456791 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.456799 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:32.456806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:32.456868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:32.482420 1136586 cri.go:89] found id: ""
	I1208 01:57:32.482467 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.482476 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:32.482483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:32.482550 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:32.507167 1136586 cri.go:89] found id: ""
	I1208 01:57:32.507201 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.507210 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:32.507218 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:32.507281 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:32.532583 1136586 cri.go:89] found id: ""
	I1208 01:57:32.532612 1136586 logs.go:282] 0 containers: []
	W1208 01:57:32.532621 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:32.532630 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:32.532642 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:32.562135 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:32.562163 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:32.619510 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:32.619544 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:32.636767 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:32.636845 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:32.721264 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:32.711000    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.711813    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.713759    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.714144    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:32.715680    8644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:32.721287 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:32.721300 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:35.247026 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:35.260135 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:35.260203 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:35.288106 1136586 cri.go:89] found id: ""
	I1208 01:57:35.288130 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.288138 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:35.288146 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:35.288206 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:35.314646 1136586 cri.go:89] found id: ""
	I1208 01:57:35.314672 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.314682 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:35.314689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:35.314777 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:35.342658 1136586 cri.go:89] found id: ""
	I1208 01:57:35.342685 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.342693 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:35.342700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:35.342762 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:35.367839 1136586 cri.go:89] found id: ""
	I1208 01:57:35.367862 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.367870 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:35.367877 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:35.367937 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:35.392345 1136586 cri.go:89] found id: ""
	I1208 01:57:35.392419 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.392449 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:35.392461 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:35.392525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:35.417214 1136586 cri.go:89] found id: ""
	I1208 01:57:35.417241 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.417250 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:35.417257 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:35.417318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:35.444512 1136586 cri.go:89] found id: ""
	I1208 01:57:35.444538 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.444546 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:35.444556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:35.444614 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:35.470153 1136586 cri.go:89] found id: ""
	I1208 01:57:35.470227 1136586 logs.go:282] 0 containers: []
	W1208 01:57:35.470250 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:35.470272 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:35.470310 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:35.497905 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:35.497934 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:35.553331 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:35.553369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:35.568215 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:35.568246 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:35.665180 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:35.653188    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.653886    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.656478    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.658920    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:35.660611    8749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:35.665205 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:35.665219 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.193386 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:38.204636 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:38.204720 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:38.230690 1136586 cri.go:89] found id: ""
	I1208 01:57:38.230717 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.230726 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:38.230732 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:38.230791 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:38.255363 1136586 cri.go:89] found id: ""
	I1208 01:57:38.255385 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.255394 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:38.255401 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:38.255460 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:38.282875 1136586 cri.go:89] found id: ""
	I1208 01:57:38.282899 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.282907 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:38.282914 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:38.282980 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:38.308397 1136586 cri.go:89] found id: ""
	I1208 01:57:38.308422 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.308437 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:38.308443 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:38.308505 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:38.334844 1136586 cri.go:89] found id: ""
	I1208 01:57:38.334871 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.334880 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:38.334886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:38.334945 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:38.360635 1136586 cri.go:89] found id: ""
	I1208 01:57:38.360659 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.360669 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:38.360676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:38.360737 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:38.385673 1136586 cri.go:89] found id: ""
	I1208 01:57:38.385702 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.385710 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:38.385717 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:38.385776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:38.410525 1136586 cri.go:89] found id: ""
	I1208 01:57:38.410560 1136586 logs.go:282] 0 containers: []
	W1208 01:57:38.410569 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:38.410578 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:38.410589 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:38.467839 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:38.467874 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:38.482720 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:38.482748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:38.547244 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:38.539050    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.539588    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541229    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.541656    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:38.543152    8853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:38.547268 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:38.547282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:38.573312 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:38.573350 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
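The "container status" gather above uses a small shell fallback: it substitutes the path that "which crictl" resolves (or the bare word crictl if none, so the failed call still triggers the alternative) and falls back to docker ps -a when that command fails. The same fallback spelled out as a sketch:

    # Prefer crictl for listing all containers; fall back to docker if the call fails
    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a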
	I1208 01:57:41.116290 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:41.132190 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:41.132273 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:41.164024 1136586 cri.go:89] found id: ""
	I1208 01:57:41.164049 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.164058 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:41.164064 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:41.164126 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:41.190343 1136586 cri.go:89] found id: ""
	I1208 01:57:41.190380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.190390 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:41.190396 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:41.190480 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:41.215567 1136586 cri.go:89] found id: ""
	I1208 01:57:41.215591 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.215600 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:41.215607 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:41.215712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:41.241307 1136586 cri.go:89] found id: ""
	I1208 01:57:41.241380 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.241404 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:41.241424 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:41.241510 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:41.266598 1136586 cri.go:89] found id: ""
	I1208 01:57:41.266666 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.266682 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:41.266689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:41.266748 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:41.292745 1136586 cri.go:89] found id: ""
	I1208 01:57:41.292806 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.292833 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:41.292851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:41.292947 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:41.322477 1136586 cri.go:89] found id: ""
	I1208 01:57:41.322503 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.322528 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:41.322534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:41.322598 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:41.348001 1136586 cri.go:89] found id: ""
	I1208 01:57:41.348028 1136586 logs.go:282] 0 containers: []
	W1208 01:57:41.348037 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:41.348047 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:41.348059 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:41.413651 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:41.404826    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.405621    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407398    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.407998    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:41.409733    8958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:41.413677 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:41.413690 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:41.443591 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:41.443637 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:41.475807 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:41.475839 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:41.531946 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:41.531985 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
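The same poll now repeats roughly every three seconds: pgrep for a kube-apiserver process, then a crictl listing per expected control-plane container, and every query returns empty, so no control-plane container was ever created. A sketch of one pass of that sweep done by hand, using the container names exactly as they appear in the log:

    # One pass of the per-component container sweep (run inside the node)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
        sudo crictl ps -a --quiet --name="$c" | grep . || echo "no container matching $c"
    done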
	I1208 01:57:44.047381 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:44.058560 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:44.058632 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:44.086946 1136586 cri.go:89] found id: ""
	I1208 01:57:44.086974 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.086983 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:44.086990 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:44.087055 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:44.119808 1136586 cri.go:89] found id: ""
	I1208 01:57:44.119837 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.119846 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:44.119853 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:44.119914 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:44.151166 1136586 cri.go:89] found id: ""
	I1208 01:57:44.151189 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.151197 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:44.151204 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:44.151266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:44.179208 1136586 cri.go:89] found id: ""
	I1208 01:57:44.179232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.179240 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:44.179247 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:44.179307 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:44.204931 1136586 cri.go:89] found id: ""
	I1208 01:57:44.204957 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.204967 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:44.204973 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:44.205086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:44.233222 1136586 cri.go:89] found id: ""
	I1208 01:57:44.233263 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.233289 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:44.233303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:44.233381 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:44.258112 1136586 cri.go:89] found id: ""
	I1208 01:57:44.258180 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.258204 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:44.258225 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:44.258301 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:44.282317 1136586 cri.go:89] found id: ""
	I1208 01:57:44.282339 1136586 logs.go:282] 0 containers: []
	W1208 01:57:44.282348 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:44.282358 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:44.282369 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:44.337431 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:44.337465 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:44.352560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:44.352633 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:44.416710 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:44.408693    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.409087    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.410732    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.411301    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:44.412835    9081 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:44.416734 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:44.416745 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:44.443231 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:44.443264 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:46.971715 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:46.982590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:46.982716 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:47.013622 1136586 cri.go:89] found id: ""
	I1208 01:57:47.013655 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.013665 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:47.013689 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:47.013773 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:47.039262 1136586 cri.go:89] found id: ""
	I1208 01:57:47.039288 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.039298 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:47.039305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:47.039369 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:47.064571 1136586 cri.go:89] found id: ""
	I1208 01:57:47.064597 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.064606 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:47.064612 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:47.064671 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:47.103360 1136586 cri.go:89] found id: ""
	I1208 01:57:47.103428 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.103452 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:47.103471 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:47.103558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:47.137446 1136586 cri.go:89] found id: ""
	I1208 01:57:47.137514 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.137537 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:47.137556 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:47.137643 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:47.167484 1136586 cri.go:89] found id: ""
	I1208 01:57:47.167507 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.167515 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:47.167522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:47.167581 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:47.198040 1136586 cri.go:89] found id: ""
	I1208 01:57:47.198072 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.198082 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:47.198088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:47.198155 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:47.222585 1136586 cri.go:89] found id: ""
	I1208 01:57:47.222609 1136586 logs.go:282] 0 containers: []
	W1208 01:57:47.222618 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:47.222635 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:47.222648 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:47.253438 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:47.253468 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:47.312655 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:47.312692 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:47.328066 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:47.328146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:47.396328 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:47.386568    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.387104    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.388891    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.389497    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:47.391083    9200 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:47.396351 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:47.396365 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:49.922587 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:49.933241 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:49.933357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:49.957944 1136586 cri.go:89] found id: ""
	I1208 01:57:49.957967 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.957976 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:49.957983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:49.958043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:49.983531 1136586 cri.go:89] found id: ""
	I1208 01:57:49.983556 1136586 logs.go:282] 0 containers: []
	W1208 01:57:49.983565 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:49.983573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:49.983634 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:50.014921 1136586 cri.go:89] found id: ""
	I1208 01:57:50.014948 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.014958 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:50.014965 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:50.015054 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:50.051300 1136586 cri.go:89] found id: ""
	I1208 01:57:50.051356 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.051365 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:50.051373 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:50.051439 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:50.078205 1136586 cri.go:89] found id: ""
	I1208 01:57:50.078232 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.078242 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:50.078248 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:50.078313 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:50.116415 1136586 cri.go:89] found id: ""
	I1208 01:57:50.116472 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.116482 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:50.116489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:50.116549 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:50.152924 1136586 cri.go:89] found id: ""
	I1208 01:57:50.152953 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.152962 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:50.152971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:50.153034 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:50.183266 1136586 cri.go:89] found id: ""
	I1208 01:57:50.183303 1136586 logs.go:282] 0 containers: []
	W1208 01:57:50.183313 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:50.183323 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:50.183339 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:50.219490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:50.219518 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:50.278125 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:50.278160 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:50.293360 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:50.293392 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:50.361099 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:50.352253    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.353435    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.354998    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.355436    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:50.357086    9312 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:50.361124 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:50.361137 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:52.887762 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:52.898605 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:52.898684 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:52.924862 1136586 cri.go:89] found id: ""
	I1208 01:57:52.924888 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.924898 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:52.924904 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:52.924967 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:52.953738 1136586 cri.go:89] found id: ""
	I1208 01:57:52.953766 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.953775 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:52.953781 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:52.953841 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:52.979112 1136586 cri.go:89] found id: ""
	I1208 01:57:52.979135 1136586 logs.go:282] 0 containers: []
	W1208 01:57:52.979143 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:52.979156 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:52.979220 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:53.010105 1136586 cri.go:89] found id: ""
	I1208 01:57:53.010136 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.010146 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:53.010153 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:53.010224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:53.040709 1136586 cri.go:89] found id: ""
	I1208 01:57:53.040737 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.040746 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:53.040759 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:53.040820 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:53.066591 1136586 cri.go:89] found id: ""
	I1208 01:57:53.066615 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.066624 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:53.066631 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:53.066690 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:53.103691 1136586 cri.go:89] found id: ""
	I1208 01:57:53.103721 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.103730 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:53.103737 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:53.103796 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:53.135825 1136586 cri.go:89] found id: ""
	I1208 01:57:53.135860 1136586 logs.go:282] 0 containers: []
	W1208 01:57:53.135869 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:53.135879 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:53.135892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:53.154871 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:53.154897 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:53.223770 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:53.214735    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.215381    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217145    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.217709    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:53.219315    9411 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:53.223803 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:53.223818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:53.248879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:53.248912 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:53.278989 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:53.279015 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
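With crictl reporting no containers at all, the kubelet journal gathered above is the natural next place to look for why nothing starts (failed static pod manifests, image pulls, cgroup errors, and the like). Checking it directly with the same unit and line count the log uses, as an illustrative one-liner:

    # Surface recent kubelet errors from the journal (run inside the node)
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20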
	I1208 01:57:55.836344 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:55.851014 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:55.851088 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:55.880945 1136586 cri.go:89] found id: ""
	I1208 01:57:55.880968 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.880977 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:55.880983 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:55.881047 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:55.918324 1136586 cri.go:89] found id: ""
	I1208 01:57:55.918348 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.918357 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:55.918363 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:55.918420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:55.943772 1136586 cri.go:89] found id: ""
	I1208 01:57:55.943799 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.943808 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:55.943814 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:55.943872 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:55.968672 1136586 cri.go:89] found id: ""
	I1208 01:57:55.968695 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.968705 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:55.968711 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:55.968772 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:55.993546 1136586 cri.go:89] found id: ""
	I1208 01:57:55.993573 1136586 logs.go:282] 0 containers: []
	W1208 01:57:55.993582 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:55.993588 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:55.993648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:56.026891 1136586 cri.go:89] found id: ""
	I1208 01:57:56.026916 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.026924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:56.026931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:56.026998 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:56.053302 1136586 cri.go:89] found id: ""
	I1208 01:57:56.053334 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.053344 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:56.053356 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:56.053468 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:56.079706 1136586 cri.go:89] found id: ""
	I1208 01:57:56.079733 1136586 logs.go:282] 0 containers: []
	W1208 01:57:56.079741 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:56.079750 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:56.079761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:56.142320 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:56.142357 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:56.157995 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:56.158067 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:56.221039 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:56.213240    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.213839    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215294    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.215694    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:56.217124    9525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1208 01:57:56.221063 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:56.221077 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:56.247019 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:56.247058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:57:58.775233 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:57:58.785596 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:57:58.785682 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:57:58.809955 1136586 cri.go:89] found id: ""
	I1208 01:57:58.809986 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.809996 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:57:58.810002 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:57:58.810061 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:57:58.835423 1136586 cri.go:89] found id: ""
	I1208 01:57:58.835447 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.835456 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:57:58.835462 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:57:58.835524 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:57:58.867905 1136586 cri.go:89] found id: ""
	I1208 01:57:58.867928 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.867937 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:57:58.867943 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:57:58.868003 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:57:58.896767 1136586 cri.go:89] found id: ""
	I1208 01:57:58.896794 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.896803 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:57:58.896810 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:57:58.896868 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:57:58.926611 1136586 cri.go:89] found id: ""
	I1208 01:57:58.926633 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.926642 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:57:58.926648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:57:58.926707 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:57:58.954977 1136586 cri.go:89] found id: ""
	I1208 01:57:58.955001 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.955010 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:57:58.955016 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:57:58.955075 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:57:58.984186 1136586 cri.go:89] found id: ""
	I1208 01:57:58.984209 1136586 logs.go:282] 0 containers: []
	W1208 01:57:58.984218 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:57:58.984224 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:57:58.984286 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:57:59.011291 1136586 cri.go:89] found id: ""
	I1208 01:57:59.011314 1136586 logs.go:282] 0 containers: []
	W1208 01:57:59.011323 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:57:59.011333 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:57:59.011346 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:57:59.067486 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:57:59.067520 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:57:59.082307 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:57:59.082334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:57:59.162802 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:57:59.150483    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.151404    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153152    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.153438    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:57:59.158584    9638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:57:59.162826 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:57:59.162838 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:57:59.187405 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:57:59.187437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
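
The eight crictl queries above are one pass of minikube's container sweep: each runs `crictl ps -a --quiet --name=<component>` on the node and treats empty output as "no container found". Below is a minimal sketch of that check, assuming crictl is on PATH on the node; the helper name findContainers is hypothetical, not minikube's actual API.

	// Illustration only: one container-sweep pass like the log lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// findContainers runs `crictl ps -a --quiet --name=<name>` and returns the
	// container IDs it prints, one per line; an empty result is the
	// `found id: ""` case in the log.
	func findContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := findContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

Run against a node in the state logged above, every component would come back empty, matching the repeated `found id: ""` lines.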
	I1208 01:58:01.720540 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:01.731197 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:01.731266 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:01.756392 1136586 cri.go:89] found id: ""
	I1208 01:58:01.756414 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.756431 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:01.756438 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:01.756504 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:01.782980 1136586 cri.go:89] found id: ""
	I1208 01:58:01.783050 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.783074 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:01.783099 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:01.783180 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:01.808911 1136586 cri.go:89] found id: ""
	I1208 01:58:01.808947 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.808957 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:01.808964 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:01.809032 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:01.833417 1136586 cri.go:89] found id: ""
	I1208 01:58:01.833490 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.833514 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:01.833534 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:01.833644 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:01.863178 1136586 cri.go:89] found id: ""
	I1208 01:58:01.863255 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.863277 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:01.863296 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:01.863391 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:01.893466 1136586 cri.go:89] found id: ""
	I1208 01:58:01.893540 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.893562 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:01.893582 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:01.893669 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:01.927969 1136586 cri.go:89] found id: ""
	I1208 01:58:01.928046 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.928060 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:01.928067 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:01.928137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:01.954102 1136586 cri.go:89] found id: ""
	I1208 01:58:01.954130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:01.954141 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:01.954150 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:01.954162 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:02.011065 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:02.011103 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:02.028187 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:02.028220 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:02.092492 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:02.083984    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.084527    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086185    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.086757    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:02.088395    9753 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:02.092518 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:02.092532 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:02.123344 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:02.123377 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
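
The timestamps of successive passes (01:57:58, 01:58:01, 01:58:04, ...) show a roughly three-second cadence. A minimal sketch of such a fixed-interval wait loop follows, using the apiserver address localhost:8443 from the log; this illustrates the pattern, not minikube's implementation.

	// Illustration only: poll a TCP port at a fixed interval until it answers
	// or a deadline passes.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, interval)
			if err == nil {
				conn.Close()
				return nil // apiserver is accepting connections
			}
			// While the apiserver is down, err is the same "connection refused"
			// that kubectl reports in the stderr blocks above.
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	}

	func main() {
		if err := waitForAPIServer("127.0.0.1:8443", 3*time.Second, time.Minute); err != nil {
			fmt.Println(err)
		}
	}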
	[The same poll repeats with identical results at 01:58:04, 01:58:07, 01:58:10, 01:58:13, 01:58:16, and 01:58:19: every crictl query finds no container for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard, and each "describe nodes" gather fails with "connection refused" on localhost:8443. Only the timestamps and the kubectl PIDs (9876, 9978, 10089, 10197, 10312, 10425) change between passes.]
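
Each failed "describe nodes" gather comes from running the node's own kubectl binary against the local kubeconfig; its non-zero exit surfaces in the log as "Process exited with status 1". A hedged reproduction of that probe, using the binary and kubeconfig paths from the log (illustration only, not logs.go itself):

	// Illustration only: run the node's kubectl and report its exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// With the apiserver refusing connections, this prints status 1
			// and the same stderr shown in the gathers above.
			fmt.Printf("kubectl exited with status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		fmt.Printf("%s", out)
	}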
	I1208 01:58:22.280191 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:22.290698 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:22.290771 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:22.319983 1136586 cri.go:89] found id: ""
	I1208 01:58:22.320007 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.320016 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:22.320022 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:22.320084 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:22.349912 1136586 cri.go:89] found id: ""
	I1208 01:58:22.349939 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.349949 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:22.349955 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:22.350016 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:22.381227 1136586 cri.go:89] found id: ""
	I1208 01:58:22.381253 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.381262 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:22.381269 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:22.381327 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:22.412055 1136586 cri.go:89] found id: ""
	I1208 01:58:22.412130 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.412143 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:22.412150 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:22.412219 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:22.437094 1136586 cri.go:89] found id: ""
	I1208 01:58:22.437169 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.437193 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:22.437214 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:22.437338 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:22.466779 1136586 cri.go:89] found id: ""
	I1208 01:58:22.466809 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.466817 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:22.466824 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:22.466888 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:22.492472 1136586 cri.go:89] found id: ""
	I1208 01:58:22.492555 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.492580 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:22.492599 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:22.492683 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:22.517817 1136586 cri.go:89] found id: ""
	I1208 01:58:22.517865 1136586 logs.go:282] 0 containers: []
	W1208 01:58:22.517875 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
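
Each polling cycle sweeps the same eight component names through crictl inside the containerd runc root for the k8s.io namespace (/run/containerd/runc/k8s.io); an empty ID list is what produces each No-container-was-found warning. The sweep reduces to a loop of this shape (component names copied from the log; the loop form itself is an assumption):

	# One crictl query per expected control-plane component; empty output means
	# the component never started, matching the warnings above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids="$(sudo crictl ps -a --quiet --name="$name")"
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done
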
	I1208 01:58:22.517884 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:22.517896 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
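
The dmesg invocation narrows the kernel ring buffer to warning-and-worse messages before tailing it. For reference, the flags decompose as follows (standard util-linux dmesg options):

	# -P           --nopager: print directly instead of paging
	# -H           --human: human-readable timestamps
	# -L=never     --color=never: no ANSI colour codes in captured output
	# --level ...  keep only the listed priorities
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
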
	I1208 01:58:22.533468 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:22.533495 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:22.600107 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:22.591549   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.592250   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594062   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594494   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.596227   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:22.591549   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.592250   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594062   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.594494   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:22.596227   10537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
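
Note that the describe-nodes probe does not use the host's kubectl: it runs the version-matched binary minikube ships inside the node against the node-local kubeconfig, so the exit status 1 reflects the in-node view of the cluster. Reproduced by hand (paths exactly as in the log), assuming an SSH session on the node:

	# Run the bundled kubectl the same way the gatherer does.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig
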
	I1208 01:58:22.600132 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:22.600145 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:22.625768 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:22.625805 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:22.654249 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:22.654334 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.216756 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:25.228093 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:25.228171 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:25.254793 1136586 cri.go:89] found id: ""
	I1208 01:58:25.254820 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.254840 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:25.254848 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:25.254911 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:25.280729 1136586 cri.go:89] found id: ""
	I1208 01:58:25.280756 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.280765 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:25.280772 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:25.280856 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:25.306714 1136586 cri.go:89] found id: ""
	I1208 01:58:25.306786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.306802 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:25.306809 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:25.306883 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:25.333920 1136586 cri.go:89] found id: ""
	I1208 01:58:25.333955 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.333964 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:25.333971 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:25.334044 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:25.361369 1136586 cri.go:89] found id: ""
	I1208 01:58:25.361396 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.361405 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:25.361412 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:25.361486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:25.392931 1136586 cri.go:89] found id: ""
	I1208 01:58:25.392958 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.392967 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:25.392974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:25.393046 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:25.423143 1136586 cri.go:89] found id: ""
	I1208 01:58:25.423168 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.423177 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:25.423183 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:25.423245 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:25.452795 1136586 cri.go:89] found id: ""
	I1208 01:58:25.452872 1136586 logs.go:282] 0 containers: []
	W1208 01:58:25.452888 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:25.452899 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:25.452913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:25.479544 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:25.479585 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:25.510747 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:25.510777 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:25.566401 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:25.566437 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:25.581786 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:25.581816 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:25.653146 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:25.644228   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.645011   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.646682   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.647230   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.648941   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:25.644228   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.645011   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.646682   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.647230   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:25.648941   10664 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:28.153984 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:28.164723 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:28.164793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:28.188760 1136586 cri.go:89] found id: ""
	I1208 01:58:28.188786 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.188796 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:28.188803 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:28.188865 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:28.213011 1136586 cri.go:89] found id: ""
	I1208 01:58:28.213037 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.213046 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:28.213053 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:28.213114 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:28.237473 1136586 cri.go:89] found id: ""
	I1208 01:58:28.237547 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.237559 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:28.237566 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:28.237692 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:28.264353 1136586 cri.go:89] found id: ""
	I1208 01:58:28.264378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.264387 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:28.264394 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:28.264478 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:28.289216 1136586 cri.go:89] found id: ""
	I1208 01:58:28.289250 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.289259 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:28.289265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:28.289332 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:28.314397 1136586 cri.go:89] found id: ""
	I1208 01:58:28.314431 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.314440 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:28.314480 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:28.314553 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:28.339256 1136586 cri.go:89] found id: ""
	I1208 01:58:28.339290 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.339299 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:28.339305 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:28.339372 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:28.376790 1136586 cri.go:89] found id: ""
	I1208 01:58:28.376824 1136586 logs.go:282] 0 containers: []
	W1208 01:58:28.376833 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:28.376842 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:28.376854 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:28.412562 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:28.412597 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:28.468784 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:28.468818 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:28.483513 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:28.483539 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:28.548999 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:28.540733   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.541130   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.542744   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.543481   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.545172   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:28.540733   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.541130   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.542744   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.543481   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:28.545172   10774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:28.549069 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:28.549088 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.074358 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:31.085483 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:31.085557 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:31.113378 1136586 cri.go:89] found id: ""
	I1208 01:58:31.113404 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.113413 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:31.113419 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:31.113486 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:31.151500 1136586 cri.go:89] found id: ""
	I1208 01:58:31.151527 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.151537 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:31.151544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:31.151606 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:31.198664 1136586 cri.go:89] found id: ""
	I1208 01:58:31.198692 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.198701 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:31.198708 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:31.198770 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:31.225073 1136586 cri.go:89] found id: ""
	I1208 01:58:31.225100 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.225109 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:31.225115 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:31.225178 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:31.253221 1136586 cri.go:89] found id: ""
	I1208 01:58:31.253248 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.253256 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:31.253263 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:31.253328 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:31.278685 1136586 cri.go:89] found id: ""
	I1208 01:58:31.278715 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.278724 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:31.278731 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:31.278793 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:31.308014 1136586 cri.go:89] found id: ""
	I1208 01:58:31.308040 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.308050 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:31.308057 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:31.308118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:31.333618 1136586 cri.go:89] found id: ""
	I1208 01:58:31.333646 1136586 logs.go:282] 0 containers: []
	W1208 01:58:31.333655 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:31.333666 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:31.333677 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:31.360688 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:31.360767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:31.400673 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:31.400748 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:31.458405 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:31.458467 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:31.473371 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:31.473403 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:31.535352 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:31.527438   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.527848   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529393   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529711   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.531184   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:31.527438   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.527848   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529393   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.529711   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:31.531184   10889 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
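
Stepping back, the roughly three-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` entries is minikube waiting for an apiserver process to appear, re-gathering the same logs on every miss. The shape of that wait, as a hypothetical reconstruction (the sleep interval is inferred from the timestamps, not taken from the source):

	# Assumed sketch of the wait loop; -x exact match, -n newest, -f full cmdline.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3   # cycles in the log land about three seconds apart
	done
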
	I1208 01:58:34.035643 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:34.047071 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:34.047236 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:34.072671 1136586 cri.go:89] found id: ""
	I1208 01:58:34.072696 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.072705 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:34.072712 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:34.072776 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:34.102807 1136586 cri.go:89] found id: ""
	I1208 01:58:34.102835 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.102844 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:34.102851 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:34.102910 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:34.129970 1136586 cri.go:89] found id: ""
	I1208 01:58:34.129998 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.130007 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:34.130017 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:34.130077 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:34.156982 1136586 cri.go:89] found id: ""
	I1208 01:58:34.157009 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.157019 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:34.157026 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:34.157086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:34.181976 1136586 cri.go:89] found id: ""
	I1208 01:58:34.182003 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.182013 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:34.182020 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:34.182081 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:34.206537 1136586 cri.go:89] found id: ""
	I1208 01:58:34.206615 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.206630 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:34.206638 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:34.206699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:34.236167 1136586 cri.go:89] found id: ""
	I1208 01:58:34.236192 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.236201 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:34.236210 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:34.236270 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:34.262308 1136586 cri.go:89] found id: ""
	I1208 01:58:34.262332 1136586 logs.go:282] 0 containers: []
	W1208 01:58:34.262341 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:34.262351 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:34.262363 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:34.317558 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:34.317593 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:34.332448 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:34.332475 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:34.412027 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:34.403876   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.404660   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406277   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406619   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.408039   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:34.403876   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.404660   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406277   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.406619   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:34.408039   10988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:34.412050 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:34.412062 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:34.438062 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:34.438097 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:36.967795 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:36.978660 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:36.978730 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:37.012757 1136586 cri.go:89] found id: ""
	I1208 01:58:37.012787 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.012797 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:37.012804 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:37.012878 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:37.041663 1136586 cri.go:89] found id: ""
	I1208 01:58:37.041685 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.041693 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:37.041700 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:37.041758 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:37.066610 1136586 cri.go:89] found id: ""
	I1208 01:58:37.066694 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.066716 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:37.066734 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:37.066844 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:37.094085 1136586 cri.go:89] found id: ""
	I1208 01:58:37.094162 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.094187 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:37.094209 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:37.094319 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:37.132780 1136586 cri.go:89] found id: ""
	I1208 01:58:37.132864 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.132886 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:37.132905 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:37.133017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:37.169263 1136586 cri.go:89] found id: ""
	I1208 01:58:37.169340 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.169365 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:37.169386 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:37.169498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:37.194196 1136586 cri.go:89] found id: ""
	I1208 01:58:37.194275 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.194300 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:37.194319 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:37.194404 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:37.219299 1136586 cri.go:89] found id: ""
	I1208 01:58:37.219378 1136586 logs.go:282] 0 containers: []
	W1208 01:58:37.219415 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:37.219442 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:37.219469 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:37.274745 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:37.274782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:37.289751 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:37.289779 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:37.363255 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:37.352560   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.353342   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355038   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355657   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.357229   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:37.352560   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.353342   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355038   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.355657   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:37.357229   11100 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:37.363297 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:37.363316 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:37.401496 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:37.401554 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:39.942202 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:39.953239 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:39.953312 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:39.978920 1136586 cri.go:89] found id: ""
	I1208 01:58:39.978943 1136586 logs.go:282] 0 containers: []
	W1208 01:58:39.978952 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:39.978959 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:39.979017 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:40.025284 1136586 cri.go:89] found id: ""
	I1208 01:58:40.025316 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.025343 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:40.025352 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:40.025427 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:40.067843 1136586 cri.go:89] found id: ""
	I1208 01:58:40.067869 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.067879 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:40.067886 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:40.067952 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:40.102669 1136586 cri.go:89] found id: ""
	I1208 01:58:40.102759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.102785 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:40.102806 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:40.102923 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:40.150768 1136586 cri.go:89] found id: ""
	I1208 01:58:40.150799 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.150809 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:40.150815 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:40.150881 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:40.179334 1136586 cri.go:89] found id: ""
	I1208 01:58:40.179362 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.179373 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:40.179382 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:40.179453 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:40.208035 1136586 cri.go:89] found id: ""
	I1208 01:58:40.208063 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.208072 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:40.208079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:40.208144 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:40.238244 1136586 cri.go:89] found id: ""
	I1208 01:58:40.238286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:40.238296 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:40.238306 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:40.238320 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:40.264240 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:40.264279 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:40.295875 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:40.295900 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:40.355993 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:40.356087 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:40.374494 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:40.374575 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:40.448504 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:40.440991   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.441508   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.442670   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.443116   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:40.444543   11227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:42.948778 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:42.959677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:42.959745 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:42.984449 1136586 cri.go:89] found id: ""
	I1208 01:58:42.984474 1136586 logs.go:282] 0 containers: []
	W1208 01:58:42.984483 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:42.984489 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:42.984555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:43.015138 1136586 cri.go:89] found id: ""
	I1208 01:58:43.015163 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.015172 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:43.015178 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:43.015242 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:43.040581 1136586 cri.go:89] found id: ""
	I1208 01:58:43.040608 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.040617 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:43.040623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:43.040685 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:43.066316 1136586 cri.go:89] found id: ""
	I1208 01:58:43.066345 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.066367 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:43.066374 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:43.066484 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:43.095034 1136586 cri.go:89] found id: ""
	I1208 01:58:43.095062 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.095071 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:43.095077 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:43.095137 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:43.129297 1136586 cri.go:89] found id: ""
	I1208 01:58:43.129323 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.129333 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:43.129340 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:43.129413 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:43.160843 1136586 cri.go:89] found id: ""
	I1208 01:58:43.160912 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.160929 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:43.160937 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:43.161012 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:43.189017 1136586 cri.go:89] found id: ""
	I1208 01:58:43.189043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:43.189051 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:43.189060 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:43.189071 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:43.245153 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:43.245189 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:43.260337 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:43.260380 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:43.329966 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:43.320329   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.321163   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.322928   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.323237   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:43.325298   11327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:43.329985 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:43.329998 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:43.357975 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:43.358058 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:45.892416 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:45.902821 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:45.902893 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:45.929257 1136586 cri.go:89] found id: ""
	I1208 01:58:45.929283 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.929292 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:45.929299 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:45.929357 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:45.954817 1136586 cri.go:89] found id: ""
	I1208 01:58:45.954851 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.954861 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:45.954867 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:45.954928 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:45.980153 1136586 cri.go:89] found id: ""
	I1208 01:58:45.980183 1136586 logs.go:282] 0 containers: []
	W1208 01:58:45.980196 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:45.980202 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:45.980263 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:46.009369 1136586 cri.go:89] found id: ""
	I1208 01:58:46.009398 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.009408 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:46.009415 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:46.009555 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:46.035686 1136586 cri.go:89] found id: ""
	I1208 01:58:46.035713 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.035736 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:46.035743 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:46.035815 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:46.065295 1136586 cri.go:89] found id: ""
	I1208 01:58:46.065327 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.065337 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:46.065344 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:46.065414 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:46.104678 1136586 cri.go:89] found id: ""
	I1208 01:58:46.104746 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.104769 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:46.104790 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:46.104877 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:46.134606 1136586 cri.go:89] found id: ""
	I1208 01:58:46.134682 1136586 logs.go:282] 0 containers: []
	W1208 01:58:46.134705 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:46.134727 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:46.134766 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:46.198135 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:46.198171 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:46.213155 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:46.213180 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:46.287421 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:46.277793   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.278621   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.280606   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.281406   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:46.283123   11439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:46.287443 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:46.287456 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:46.313370 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:46.313405 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:48.849489 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:48.861044 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:48.861117 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:48.886203 1136586 cri.go:89] found id: ""
	I1208 01:58:48.886227 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.886237 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:48.886243 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:48.886305 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:48.911152 1136586 cri.go:89] found id: ""
	I1208 01:58:48.911177 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.911187 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:48.911193 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:48.911275 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:48.935595 1136586 cri.go:89] found id: ""
	I1208 01:58:48.935620 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.935629 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:48.935635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:48.935750 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:48.959533 1136586 cri.go:89] found id: ""
	I1208 01:58:48.959558 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.959566 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:48.959573 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:48.959631 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:48.985031 1136586 cri.go:89] found id: ""
	I1208 01:58:48.985057 1136586 logs.go:282] 0 containers: []
	W1208 01:58:48.985066 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:48.985073 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:48.985176 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:49.014577 1136586 cri.go:89] found id: ""
	I1208 01:58:49.014603 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.014612 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:49.014619 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:49.014679 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:49.038952 1136586 cri.go:89] found id: ""
	I1208 01:58:49.038978 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.038987 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:49.038993 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:49.039051 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:49.063733 1136586 cri.go:89] found id: ""
	I1208 01:58:49.063759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:49.063768 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:49.063777 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:49.063788 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:49.097818 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:49.097852 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:49.161476 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:49.161513 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:49.178959 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:49.178995 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:49.243404 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:49.234311   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.235209   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.236837   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.237144   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:49.238903   11561 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:49.243465 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:49.243502 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:51.768803 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:51.780779 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:51.780851 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:51.808733 1136586 cri.go:89] found id: ""
	I1208 01:58:51.808759 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.808768 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:51.808775 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:51.808846 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:51.835560 1136586 cri.go:89] found id: ""
	I1208 01:58:51.835587 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.835599 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:51.835606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:51.835670 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:51.860461 1136586 cri.go:89] found id: ""
	I1208 01:58:51.860485 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.860494 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:51.860501 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:51.860562 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:51.885253 1136586 cri.go:89] found id: ""
	I1208 01:58:51.885286 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.885294 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:51.885303 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:51.885373 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:51.909393 1136586 cri.go:89] found id: ""
	I1208 01:58:51.909420 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.909429 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:51.909436 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:51.909498 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:51.934211 1136586 cri.go:89] found id: ""
	I1208 01:58:51.934245 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.934254 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:51.934261 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:51.934331 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:51.958861 1136586 cri.go:89] found id: ""
	I1208 01:58:51.958887 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.958896 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:51.958903 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:51.958961 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:51.984069 1136586 cri.go:89] found id: ""
	I1208 01:58:51.984095 1136586 logs.go:282] 0 containers: []
	W1208 01:58:51.984106 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:51.984115 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:51.984146 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:51.999081 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:51.999109 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:52.068304 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:52.058511   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.059303   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.060796   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.061189   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:52.064332   11654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:52.068327 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:52.068341 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:52.094374 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:52.094481 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:52.127916 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:52.127993 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:54.695208 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:54.706109 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:54.706218 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:54.731787 1136586 cri.go:89] found id: ""
	I1208 01:58:54.731814 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.731823 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:54.731835 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:54.731895 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:54.760606 1136586 cri.go:89] found id: ""
	I1208 01:58:54.760631 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.760639 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:54.760646 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:54.760706 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:54.786598 1136586 cri.go:89] found id: ""
	I1208 01:58:54.786626 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.786635 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:54.786641 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:54.786699 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:54.816536 1136586 cri.go:89] found id: ""
	I1208 01:58:54.816562 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.816572 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:54.816579 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:54.816641 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:54.845022 1136586 cri.go:89] found id: ""
	I1208 01:58:54.845048 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.845056 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:54.845063 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:54.845125 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:54.870700 1136586 cri.go:89] found id: ""
	I1208 01:58:54.870725 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.870734 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:54.870741 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:54.870799 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:54.899897 1136586 cri.go:89] found id: ""
	I1208 01:58:54.899923 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.899934 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:54.899941 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:54.900002 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:54.928551 1136586 cri.go:89] found id: ""
	I1208 01:58:54.928575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:54.928584 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:54.928593 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:54.928606 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:54.991743 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:54.983908   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.984292   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.985845   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.986390   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:54.988020   11762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:54.991769 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:54.991782 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:55.022605 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:55.022696 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:55.052018 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:55.052044 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:58:55.112862 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:55.112979 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.628955 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:58:57.639865 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:58:57.639964 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:58:57.667931 1136586 cri.go:89] found id: ""
	I1208 01:58:57.667954 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.667962 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:58:57.667969 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:58:57.668039 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:58:57.696303 1136586 cri.go:89] found id: ""
	I1208 01:58:57.696328 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.696337 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:58:57.696343 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:58:57.696402 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:58:57.720015 1136586 cri.go:89] found id: ""
	I1208 01:58:57.720043 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.720052 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:58:57.720059 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:58:57.720120 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:58:57.748838 1136586 cri.go:89] found id: ""
	I1208 01:58:57.748910 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.748934 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:58:57.748953 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:58:57.749033 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:58:57.776554 1136586 cri.go:89] found id: ""
	I1208 01:58:57.776575 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.776584 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:58:57.776591 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:58:57.776648 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:58:57.800791 1136586 cri.go:89] found id: ""
	I1208 01:58:57.800815 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.800823 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:58:57.800830 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:58:57.800904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:58:57.825904 1136586 cri.go:89] found id: ""
	I1208 01:58:57.825975 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.825998 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:58:57.826021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:58:57.826157 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:58:57.853294 1136586 cri.go:89] found id: ""
	I1208 01:58:57.853318 1136586 logs.go:282] 0 containers: []
	W1208 01:58:57.853327 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:58:57.853336 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:58:57.853348 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:58:57.868267 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:58:57.868292 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:58:57.934230 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:58:57.926181   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.927055   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928535   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.928896   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:58:57.930384   11877 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:58:57.934259 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:58:57.934274 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:58:57.960735 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:58:57.960767 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:58:57.989741 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:58:57.989770 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.546140 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:00.557379 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:00.557497 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:00.583568 1136586 cri.go:89] found id: ""
	I1208 01:59:00.583595 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.583605 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:00.583611 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:00.583695 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:00.615812 1136586 cri.go:89] found id: ""
	I1208 01:59:00.615838 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.615847 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:00.615856 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:00.615924 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:00.642865 1136586 cri.go:89] found id: ""
	I1208 01:59:00.642905 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.642914 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:00.642921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:00.642991 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:00.669343 1136586 cri.go:89] found id: ""
	I1208 01:59:00.669418 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.669434 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:00.669441 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:00.669501 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:00.695611 1136586 cri.go:89] found id: ""
	I1208 01:59:00.695688 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.695702 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:00.695709 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:00.695774 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:00.721947 1136586 cri.go:89] found id: ""
	I1208 01:59:00.721974 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.721983 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:00.721989 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:00.722059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:00.747456 1136586 cri.go:89] found id: ""
	I1208 01:59:00.747485 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.747493 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:00.747500 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:00.747567 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:00.774802 1136586 cri.go:89] found id: ""
	I1208 01:59:00.774868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:00.774884 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:00.774894 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:00.774906 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:00.832246 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:00.832282 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:00.847202 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:00.847231 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:00.912820 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:00.904622   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.905481   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.906990   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.907398   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:00.908913   11995 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:00.912843 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:00.912856 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:00.938649 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:00.938689 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:03.468247 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:03.479180 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:03.479248 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:03.503843 1136586 cri.go:89] found id: ""
	I1208 01:59:03.503868 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.503877 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:03.503884 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:03.503946 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:03.533070 1136586 cri.go:89] found id: ""
	I1208 01:59:03.533092 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.533101 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:03.533107 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:03.533173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:03.560639 1136586 cri.go:89] found id: ""
	I1208 01:59:03.560662 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.560670 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:03.560677 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:03.560738 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:03.589123 1136586 cri.go:89] found id: ""
	I1208 01:59:03.589150 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.589159 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:03.589165 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:03.589225 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:03.620870 1136586 cri.go:89] found id: ""
	I1208 01:59:03.620893 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.620902 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:03.620908 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:03.620966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:03.648582 1136586 cri.go:89] found id: ""
	I1208 01:59:03.648607 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.648616 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:03.648623 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:03.648688 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:03.676092 1136586 cri.go:89] found id: ""
	I1208 01:59:03.676117 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.676125 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:03.676131 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:03.676193 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:03.704985 1136586 cri.go:89] found id: ""
	I1208 01:59:03.705012 1136586 logs.go:282] 0 containers: []
	W1208 01:59:03.705021 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:03.705031 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:03.705048 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:03.762437 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:03.762476 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:03.777354 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:03.777423 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:03.852604 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:03.843875   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.844783   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.846638   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.847008   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:03.848565   12110 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:03.852630 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:03.852644 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:03.877929 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:03.877964 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.407680 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:06.418391 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:06.418489 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:06.448290 1136586 cri.go:89] found id: ""
	I1208 01:59:06.448312 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.448321 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:06.448327 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:06.448386 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:06.473926 1136586 cri.go:89] found id: ""
	I1208 01:59:06.473958 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.473967 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:06.473974 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:06.474037 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:06.499614 1136586 cri.go:89] found id: ""
	I1208 01:59:06.499640 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.499649 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:06.499656 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:06.499717 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:06.526871 1136586 cri.go:89] found id: ""
	I1208 01:59:06.526895 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.526904 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:06.526910 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:06.526970 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:06.551675 1136586 cri.go:89] found id: ""
	I1208 01:59:06.551706 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.551716 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:06.551722 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:06.551797 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:06.576680 1136586 cri.go:89] found id: ""
	I1208 01:59:06.576705 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.576714 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:06.576724 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:06.576784 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:06.613884 1136586 cri.go:89] found id: ""
	I1208 01:59:06.613921 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.613930 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:06.613939 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:06.614010 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:06.642583 1136586 cri.go:89] found id: ""
	I1208 01:59:06.642619 1136586 logs.go:282] 0 containers: []
	W1208 01:59:06.642629 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:06.642638 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:06.642650 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:06.709864 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:06.701412   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.701971   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.703666   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.704330   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.706029   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:06.701412   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.701971   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.703666   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.704330   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:06.706029   12221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:06.709936 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:06.709962 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:06.739423 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:06.739463 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:06.767654 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:06.767684 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:06.826250 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:06.826285 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
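The dmesg invocation above is tuned for non-interactive capture; every flag is stock util-linux dmesg, broken out here for reference:

    # -P (--nopager): write straight to stdout, never spawn a pager
    # -H (--human):   human-readable relative timestamps
    # -L=never:       suppress ANSI color codes in the captured text
    # --level ...:    keep only warning-and-worse kernel messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400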
	I1208 01:59:09.342623 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:09.355321 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:09.355406 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:09.392040 1136586 cri.go:89] found id: ""
	I1208 01:59:09.392067 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.392080 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:09.392091 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:09.392161 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:09.420346 1136586 cri.go:89] found id: ""
	I1208 01:59:09.420372 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.420381 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:09.420387 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:09.420454 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:09.446119 1136586 cri.go:89] found id: ""
	I1208 01:59:09.446145 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.446154 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:09.446161 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:09.446224 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:09.470836 1136586 cri.go:89] found id: ""
	I1208 01:59:09.470859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.470867 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:09.470873 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:09.470930 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:09.495896 1136586 cri.go:89] found id: ""
	I1208 01:59:09.495964 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.495988 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:09.496000 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:09.496076 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:09.521109 1136586 cri.go:89] found id: ""
	I1208 01:59:09.521136 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.521145 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:09.521151 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:09.521211 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:09.551629 1136586 cri.go:89] found id: ""
	I1208 01:59:09.551652 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.551668 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:09.551676 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:09.551740 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:09.577446 1136586 cri.go:89] found id: ""
	I1208 01:59:09.577472 1136586 logs.go:282] 0 containers: []
	W1208 01:59:09.577481 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:09.577490 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:09.577500 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:09.641466 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:09.641501 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:09.657574 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:09.657600 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:09.724794 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:09.716983   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.717413   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.718926   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.719242   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.720846   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:09.716983   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.717413   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.718926   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.719242   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:09.720846   12342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:09.724818 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:09.724830 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:09.749729 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:09.749761 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
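The kubelet and containerd passes are plain journalctl tail reads: -u selects the systemd unit and -n caps the output at the newest 400 entries. When digging by hand, the same read extends naturally (all standard journalctl flags):

    # What the harness runs: the last 400 entries for one unit
    sudo journalctl -u containerd -n 400

    # Useful interactive variations
    sudo journalctl -u kubelet --no-pager --since "10 min ago"
    sudo journalctl -u kubelet -f    # follow new entries live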
	I1208 01:59:12.285155 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:12.296049 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:12.296118 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:12.325857 1136586 cri.go:89] found id: ""
	I1208 01:59:12.325891 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.325900 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:12.325907 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:12.325992 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:12.363392 1136586 cri.go:89] found id: ""
	I1208 01:59:12.363419 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.363428 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:12.363434 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:12.363499 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:12.392776 1136586 cri.go:89] found id: ""
	I1208 01:59:12.392803 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.392812 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:12.392817 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:12.392884 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:12.418895 1136586 cri.go:89] found id: ""
	I1208 01:59:12.418919 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.418928 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:12.418935 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:12.418994 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:12.444923 1136586 cri.go:89] found id: ""
	I1208 01:59:12.444947 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.444960 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:12.444966 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:12.445087 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:12.471912 1136586 cri.go:89] found id: ""
	I1208 01:59:12.471982 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.472006 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:12.472019 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:12.472093 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:12.496844 1136586 cri.go:89] found id: ""
	I1208 01:59:12.496877 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.496886 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:12.496892 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:12.496966 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:12.523601 1136586 cri.go:89] found id: ""
	I1208 01:59:12.523626 1136586 logs.go:282] 0 containers: []
	W1208 01:59:12.523635 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:12.523645 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:12.523656 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:12.581608 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:12.581646 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:12.598560 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:12.598638 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:12.666409 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:12.657320   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.658356   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.659120   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660581   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660908   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:12.657320   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.658356   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.659120   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660581   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:12.660908   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:12.666430 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:12.666474 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:12.692286 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:12.692321 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.220645 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:15.234496 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:15.234563 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:15.259957 1136586 cri.go:89] found id: ""
	I1208 01:59:15.259981 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.259991 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:15.259997 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:15.260059 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:15.285880 1136586 cri.go:89] found id: ""
	I1208 01:59:15.285906 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.285915 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:15.285921 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:15.285982 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:15.311506 1136586 cri.go:89] found id: ""
	I1208 01:59:15.311533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.311545 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:15.311552 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:15.311615 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:15.336490 1136586 cri.go:89] found id: ""
	I1208 01:59:15.336515 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.336524 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:15.336531 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:15.336590 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:15.365039 1136586 cri.go:89] found id: ""
	I1208 01:59:15.365064 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.365073 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:15.365079 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:15.365143 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:15.399712 1136586 cri.go:89] found id: ""
	I1208 01:59:15.399740 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.399749 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:15.399756 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:15.399821 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:15.427492 1136586 cri.go:89] found id: ""
	I1208 01:59:15.427517 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.427527 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:15.427533 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:15.427599 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:15.453022 1136586 cri.go:89] found id: ""
	I1208 01:59:15.453050 1136586 logs.go:282] 0 containers: []
	W1208 01:59:15.453059 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:15.453068 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:15.453081 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:15.468204 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:15.468283 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:15.533761 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:15.525297   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.525841   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.527416   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.528754   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.529318   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:15.525297   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.525841   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.527416   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.528754   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:15.529318   12567 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:15.533785 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:15.533801 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:15.558879 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:15.558914 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:15.593769 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:15.593794 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
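Each retry cycle opens with the same liveness probe: pgrep checks for an apiserver process before any CRI queries are made. The flag cluster is what makes it strict; a commented sketch:

    # -f : match the pattern against the full command line, not just the name
    # -x : require the command line to match the pattern exactly (anchored)
    # -n : if several processes match, print only the newest PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Looser variant that also prints each match's command line:
    sudo pgrep -af kube-apiserver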
	I1208 01:59:18.158848 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:18.169444 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:18.169517 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:18.195546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.195572 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.195581 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:18.195587 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:18.195649 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:18.220906 1136586 cri.go:89] found id: ""
	I1208 01:59:18.220928 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.220942 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:18.220948 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:18.221008 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:18.248546 1136586 cri.go:89] found id: ""
	I1208 01:59:18.248574 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.248584 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:18.248590 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:18.248652 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:18.273450 1136586 cri.go:89] found id: ""
	I1208 01:59:18.273477 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.273486 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:18.273492 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:18.273558 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:18.298830 1136586 cri.go:89] found id: ""
	I1208 01:59:18.298857 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.298867 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:18.298874 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:18.298936 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:18.328161 1136586 cri.go:89] found id: ""
	I1208 01:59:18.328182 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.328191 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:18.328198 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:18.328258 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:18.369715 1136586 cri.go:89] found id: ""
	I1208 01:59:18.369747 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.369756 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:18.369763 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:18.369822 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:18.400838 1136586 cri.go:89] found id: ""
	I1208 01:59:18.400865 1136586 logs.go:282] 0 containers: []
	W1208 01:59:18.400874 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:18.400883 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:18.400913 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:18.429677 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:18.429711 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:18.462210 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:18.462239 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:18.517535 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:18.517571 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:18.533236 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:18.533267 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:18.604338 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:18.591883   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.593321   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.594794   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.596094   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.597033   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:18.591883   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.593321   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.594794   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.596094   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:18.597033   12693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
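The describe-nodes pass runs the kubectl binary and kubeconfig staged on the node itself, so its failure rules out host-side client configuration and isolates the problem to the apiserver. To see which endpoint that kubeconfig actually targets (the paths are the ones shown in this log; the flags are stock kubectl, though --minify assumes a current-context is set, which minikube normally writes):

    # Print the effective cluster entry from the node's kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl config view \
      --kubeconfig=/var/lib/minikube/kubeconfig --minify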
	I1208 01:59:21.106017 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:21.116977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:21.117060 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:21.145425 1136586 cri.go:89] found id: ""
	I1208 01:59:21.145503 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.145526 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:21.145544 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:21.145633 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:21.169097 1136586 cri.go:89] found id: ""
	I1208 01:59:21.169125 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.169134 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:21.169140 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:21.169205 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:21.195045 1136586 cri.go:89] found id: ""
	I1208 01:59:21.195071 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.195081 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:21.195088 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:21.195153 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:21.221094 1136586 cri.go:89] found id: ""
	I1208 01:59:21.221128 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.221137 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:21.221144 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:21.221213 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:21.247434 1136586 cri.go:89] found id: ""
	I1208 01:59:21.247457 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.247466 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:21.247472 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:21.247531 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:21.278610 1136586 cri.go:89] found id: ""
	I1208 01:59:21.278633 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.278642 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:21.278648 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:21.278712 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:21.304567 1136586 cri.go:89] found id: ""
	I1208 01:59:21.304638 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.304654 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:21.304662 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:21.304731 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:21.331211 1136586 cri.go:89] found id: ""
	I1208 01:59:21.331281 1136586 logs.go:282] 0 containers: []
	W1208 01:59:21.331304 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:21.331324 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:21.331355 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:21.392474 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:21.392509 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:21.413166 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:21.413192 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:21.491167 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:21.478340   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.482949   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.483824   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.485685   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.486126   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:21.478340   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.482949   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.483824   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.485685   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:21.486126   12791 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:21.491190 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:21.491204 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:21.516454 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:21.516487 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
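One detail that makes the empty listings in these cycles conclusive: crictl ps -a includes exited containers, so "0 containers" means each control-plane component never came up at all, not merely that it crashed. The flags, for reference:

    # -a        : include containers in every state, not just running ones
    # --quiet   : print bare container IDs, one per line
    # --name X  : regex filter applied to the container name
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Pod sandboxes can exist even when no component containers do:
    sudo crictl pods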
	I1208 01:59:24.050552 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:24.061833 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:24.061907 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:24.089336 1136586 cri.go:89] found id: ""
	I1208 01:59:24.089363 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.089372 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:24.089380 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:24.089442 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:24.115231 1136586 cri.go:89] found id: ""
	I1208 01:59:24.115256 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.115265 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:24.115272 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:24.115347 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:24.141479 1136586 cri.go:89] found id: ""
	I1208 01:59:24.141505 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.141515 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:24.141522 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:24.141580 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:24.166759 1136586 cri.go:89] found id: ""
	I1208 01:59:24.166786 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.166795 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:24.166802 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:24.166862 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:24.191431 1136586 cri.go:89] found id: ""
	I1208 01:59:24.191453 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.191462 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:24.191468 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:24.191525 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:24.216578 1136586 cri.go:89] found id: ""
	I1208 01:59:24.216618 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.216628 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:24.216635 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:24.216708 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:24.242316 1136586 cri.go:89] found id: ""
	I1208 01:59:24.242343 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.242352 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:24.242358 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:24.242420 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:24.267328 1136586 cri.go:89] found id: ""
	I1208 01:59:24.267355 1136586 logs.go:282] 0 containers: []
	W1208 01:59:24.267365 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:24.267375 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:24.267386 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:24.322866 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:24.322901 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:24.337393 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:24.337420 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:24.422627 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:24.414753   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.415144   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.416841   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.417151   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:24.418788   12905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:24.422649 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:24.422662 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:24.447517 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:24.447551 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:26.974915 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:26.985831 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:26.985904 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:27.015934 1136586 cri.go:89] found id: ""
	I1208 01:59:27.015960 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.015970 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:27.015977 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:27.016043 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:27.042350 1136586 cri.go:89] found id: ""
	I1208 01:59:27.042376 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.042386 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:27.042400 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:27.042482 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:27.068981 1136586 cri.go:89] found id: ""
	I1208 01:59:27.069007 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.069015 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:27.069021 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:27.069086 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:27.097058 1136586 cri.go:89] found id: ""
	I1208 01:59:27.097086 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.097095 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:27.097105 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:27.097168 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:27.127221 1136586 cri.go:89] found id: ""
	I1208 01:59:27.127245 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.127253 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:27.127260 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:27.127318 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:27.152834 1136586 cri.go:89] found id: ""
	I1208 01:59:27.152859 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.152869 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:27.152875 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:27.152942 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:27.185563 1136586 cri.go:89] found id: ""
	I1208 01:59:27.185591 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.185600 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:27.185606 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:27.185667 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:27.213022 1136586 cri.go:89] found id: ""
	I1208 01:59:27.213099 1136586 logs.go:282] 0 containers: []
	W1208 01:59:27.213125 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:27.213147 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:27.213183 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:27.272193 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:27.272229 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:27.289811 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:27.289892 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:27.364663 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:27.356564   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.357333   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.358984   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.359336   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:27.360623   13015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:27.364695 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:27.364720 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:27.392211 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:27.392286 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:29.931677 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:29.942629 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:29.942709 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:29.971856 1136586 cri.go:89] found id: ""
	I1208 01:59:29.971882 1136586 logs.go:282] 0 containers: []
	W1208 01:59:29.971891 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:29.971898 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:29.971958 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:30.000222 1136586 cri.go:89] found id: ""
	I1208 01:59:30.000248 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.000258 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:30.000265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:30.000330 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:30.039259 1136586 cri.go:89] found id: ""
	I1208 01:59:30.039285 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.039295 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:30.039301 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:30.039370 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:30.096203 1136586 cri.go:89] found id: ""
	I1208 01:59:30.096247 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.096258 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:30.096265 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:30.096348 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:30.125007 1136586 cri.go:89] found id: ""
	I1208 01:59:30.125034 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.125044 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:30.125051 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:30.125138 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:30.155888 1136586 cri.go:89] found id: ""
	I1208 01:59:30.155914 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.155924 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:30.155931 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:30.155996 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:30.183068 1136586 cri.go:89] found id: ""
	I1208 01:59:30.183104 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.183114 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:30.183121 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:30.183186 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:30.211552 1136586 cri.go:89] found id: ""
	I1208 01:59:30.211577 1136586 logs.go:282] 0 containers: []
	W1208 01:59:30.211585 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:30.211601 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:30.211613 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:30.238738 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:30.238789 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:30.272245 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:30.272275 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:30.331871 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:30.331909 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:30.349711 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:30.349742 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:30.428964 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:30.420857   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.421457   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423053   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.423584   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:30.425100   13146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:32.929192 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:32.940100 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1208 01:59:32.940183 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1208 01:59:32.963581 1136586 cri.go:89] found id: ""
	I1208 01:59:32.963602 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.963611 1136586 logs.go:284] No container was found matching "kube-apiserver"
	I1208 01:59:32.963617 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1208 01:59:32.963678 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1208 01:59:32.992028 1136586 cri.go:89] found id: ""
	I1208 01:59:32.992054 1136586 logs.go:282] 0 containers: []
	W1208 01:59:32.992063 1136586 logs.go:284] No container was found matching "etcd"
	I1208 01:59:32.992069 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1208 01:59:32.992130 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1208 01:59:33.023809 1136586 cri.go:89] found id: ""
	I1208 01:59:33.023836 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.023846 1136586 logs.go:284] No container was found matching "coredns"
	I1208 01:59:33.023852 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1208 01:59:33.023919 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1208 01:59:33.048510 1136586 cri.go:89] found id: ""
	I1208 01:59:33.048533 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.048541 1136586 logs.go:284] No container was found matching "kube-scheduler"
	I1208 01:59:33.048548 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1208 01:59:33.048608 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1208 01:59:33.075068 1136586 cri.go:89] found id: ""
	I1208 01:59:33.075096 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.075106 1136586 logs.go:284] No container was found matching "kube-proxy"
	I1208 01:59:33.075113 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1208 01:59:33.075173 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1208 01:59:33.099238 1136586 cri.go:89] found id: ""
	I1208 01:59:33.099264 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.099273 1136586 logs.go:284] No container was found matching "kube-controller-manager"
	I1208 01:59:33.099280 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1208 01:59:33.099345 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1208 01:59:33.123805 1136586 cri.go:89] found id: ""
	I1208 01:59:33.123831 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.123840 1136586 logs.go:284] No container was found matching "kindnet"
	I1208 01:59:33.123846 1136586 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1208 01:59:33.123905 1136586 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1208 01:59:33.152142 1136586 cri.go:89] found id: ""
	I1208 01:59:33.152166 1136586 logs.go:282] 0 containers: []
	W1208 01:59:33.152175 1136586 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1208 01:59:33.152184 1136586 logs.go:123] Gathering logs for kubelet ...
	I1208 01:59:33.152195 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1208 01:59:33.210457 1136586 logs.go:123] Gathering logs for dmesg ...
	I1208 01:59:33.210492 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1208 01:59:33.225387 1136586 logs.go:123] Gathering logs for describe nodes ...
	I1208 01:59:33.225415 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1208 01:59:33.288797 1136586 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1208 01:59:33.280573   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.281422   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283015   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.283326   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:33.284841   13241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1208 01:59:33.288820 1136586 logs.go:123] Gathering logs for containerd ...
	I1208 01:59:33.288834 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1208 01:59:33.314642 1136586 logs.go:123] Gathering logs for container status ...
	I1208 01:59:33.314675 1136586 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1208 01:59:35.847043 1136586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:59:35.865523 1136586 out.go:203] 
	W1208 01:59:35.868530 1136586 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1208 01:59:35.868757 1136586 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1208 01:59:35.868776 1136586 out.go:285] * Related issues:
	W1208 01:59:35.868792 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1208 01:59:35.868833 1136586 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1208 01:59:35.873508 1136586 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786139868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786216570Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786327677Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786398012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786492035Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786558530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786618051Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786677259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786750130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.786832108Z" level=info msg="Connect containerd service"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787154187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.787806804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801520989Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801594475Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801680802Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.801735276Z" level=info msg="Start recovering state"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842441332Z" level=info msg="Start event monitor"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842660328Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842725527Z" level=info msg="Start streaming server"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842808506Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842872334Z" level=info msg="runtime interface starting up..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.842928778Z" level=info msg="starting plugins..."
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.843007934Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:53:32 newest-cni-457779 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:53:32 newest-cni-457779 containerd[556]: time="2025-12-08T01:53:32.845003152Z" level=info msg="containerd successfully booted in 0.084434s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 01:59:49.169165   13912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:49.170044   13912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:49.171625   13912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:49.172173   13912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 01:59:49.173771   13912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 01:59:49 up  6:42,  0 user,  load average: 1.23, 0.89, 1.24
	Linux newest-cni-457779 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 01:59:45 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:46 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 08 01:59:46 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:46 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:46 newest-cni-457779 kubelet[13775]: E1208 01:59:46.677901   13775 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:46 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:46 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:47 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 08 01:59:47 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:47 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:47 newest-cni-457779 kubelet[13796]: E1208 01:59:47.419129   13796 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:47 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:47 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:48 newest-cni-457779 kubelet[13817]: E1208 01:59:48.167651   13817 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 01:59:48 newest-cni-457779 kubelet[13843]: E1208 01:59:48.909938   13843 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 01:59:48 newest-cni-457779 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-457779 -n newest-cni-457779: exit status 2 (381.983113ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-457779" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.72s)
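
The kubelet log above shows the proximate cause of this failure: the kubelet exits during configuration validation with "kubelet is configured to not run on a host using cgroup v1" and systemd restarts it in a loop, so no control-plane static pods (including kube-apiserver) ever come up, which is what minikube surfaces as K8S_APISERVER_MISSING. This is consistent with the Ubuntu 20.04-era host kernel shown under ==> kernel <==. A minimal check of the host's cgroup mode (a sketch; shell access via `minikube ssh -p newest-cni-457779` is assumed):

	# Sketch: report the filesystem type mounted at the cgroup root.
	# "cgroup2fs" means the host is on cgroup v2; "tmpfs" indicates the legacy
	# cgroup v1 hierarchy, which this kubelet build refuses to run on.
	stat -fc %T /sys/fs/cgroup

If this prints "tmpfs", the likely remedy is running the job on a cgroup v2 host (for example, booting the kernel with systemd.unified_cgroup_hierarchy=1) rather than adjusting kubelet flags.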

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (272.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: (the identical connection-refused warning repeated 45 more times while waiting)
E1208 02:03:22.379001  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: (the identical connection-refused warning repeated 47 more times; 192.168.85.2:8443 never accepted a connection)
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:04:25.030836  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 5 more times]
E1208 02:04:30.127570  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 100 more times]
E1208 02:06:11.304455  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.771160  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.777555  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.789041  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.810427  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.852039  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:11.933490  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:12.095028  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:06:12.417361  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:06:13.059643  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:06:14.341734  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:06:16.903244  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
I1208 02:06:17.276207  846711 config.go:182] Loaded profile config "custom-flannel-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 3 more times]
E1208 02:06:22.026581  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 6 more times]
E1208 02:06:28.221475  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
[last message repeated 3 more times]
E1208 02:06:32.268434  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:06:52.750753  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1208 02:06:59.314572  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/default-k8s-diff-port-843696/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
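For reference, the refused connection the poll loop reports above can be reproduced directly from the test host; a minimal sketch, assuming the apiserver endpoint 192.168.85.2:8443 recorded by the harness is reachable from the host network:

	# Probe the apiserver health endpoint; -k skips TLS verification, -s silences progress.
	# While the apiserver is stopped this fails the same way as the poll: connect: connection refused.
	$ curl -sk https://192.168.85.2:8443/healthz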
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 2 (460.668152ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-536520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-536520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.748µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-536520 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
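The image assertion above can be re-run by hand once the apiserver is reachable again; a minimal sketch using the deployment and namespace named in the describe command (per the harness, the image string is expected to contain registry.k8s.io/echoserver:1.4):

	# Print the container image(s) of the scraper deployment via JSONPath.
	$ kubectl --context no-preload-536520 -n kubernetes-dashboard \
	    get deploy dashboard-metrics-scraper \
	    -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the describe call never reached the cluster at all: the test's 9m0s context had already expired, so kubectl failed after 1.748µs with context deadline exceeded rather than attempting the request.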
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-536520
helpers_test.go:243: (dbg) docker inspect no-preload-536520:

-- stdout --
	[
	    {
	        "Id": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	        "Created": "2025-12-08T01:37:08.21933548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1128684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-08T01:47:25.421292194Z",
	            "FinishedAt": "2025-12-08T01:47:24.077520836Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hostname",
	        "HostsPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/hosts",
	        "LogPath": "/var/lib/docker/containers/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327/655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327-json.log",
	        "Name": "/no-preload-536520",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-536520:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-536520",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "655489d4253e5519928a876f9e0b24bf54a2416b9a07219c51500d18f0c08327",
	                "LowerDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a03a2295366568ea15c9dc614a3988be0f1ea3c0a7089e9e00689b582bdf60/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-536520",
	                "Source": "/var/lib/docker/volumes/no-preload-536520/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-536520",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-536520",
	                "name.minikube.sigs.k8s.io": "no-preload-536520",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "508635803fd26385f5b74c49f258f541cf3f3701572a3e277063698fd55748b0",
	            "SandboxKey": "/var/run/docker/netns/508635803fd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-536520": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b7:e8:6e:2b:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3d059a73d01e7ee83e4114703103fa1d47dd746e9e1765e1413d62afbc65aa5c",
	                    "EndpointID": "662425aa0da883d43861485458a7d96ef656064827e7d2e8fc052d0ab70deda4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-536520",
	                        "655489d4253e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
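The inspect dump above can be reduced to the few fields the post-mortem actually consults; a minimal sketch using docker's Go-template output (container and network name no-preload-536520 taken from the dump):

	# Container state, last start time, and static IP on the cluster network.
	$ docker inspect -f '{{.State.Status}} {{.State.StartedAt}} {{(index .NetworkSettings.Networks "no-preload-536520").IPAddress}}' no-preload-536520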
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 2 (465.777844ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
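Together the two status probes show the characteristic failure shape here: the Host is Running while the APIServer is Stopped. Both can be read in one call; a minimal sketch, assuming the status struct also exposes Kubelet and Kubeconfig fields analogous to the {{.Host}} and {{.APIServer}} templates the harness uses above:

	# One-line component summary; exit status stays non-zero while any component is down.
	$ out/minikube-linux-arm64 status -p no-preload-536520 \
	    --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'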
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-536520 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                 ARGS                                                                                  │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-475514 sudo systemctl status kubelet --all --full --no-pager                                                                                        │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl cat kubelet --no-pager                                                                                                        │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                         │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /etc/kubernetes/kubelet.conf                                                                                                        │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /var/lib/kubelet/config.yaml                                                                                                        │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl status docker --all --full --no-pager                                                                                         │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl cat docker --no-pager                                                                                                         │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /etc/docker/daemon.json                                                                                                             │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo docker system info                                                                                                                      │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl status cri-docker --all --full --no-pager                                                                                     │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl cat cri-docker --no-pager                                                                                                     │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                          │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cri-dockerd --version                                                                                                                   │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl status containerd --all --full --no-pager                                                                                     │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl cat containerd --no-pager                                                                                                     │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /lib/systemd/system/containerd.service                                                                                              │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo cat /etc/containerd/config.toml                                                                                                         │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo containerd config dump                                                                                                                  │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl status crio --all --full --no-pager                                                                                           │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	│ ssh     │ -p custom-flannel-475514 sudo systemctl cat crio --no-pager                                                                                                           │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                 │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ ssh     │ -p custom-flannel-475514 sudo crio config                                                                                                                             │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ delete  │ -p custom-flannel-475514                                                                                                                                              │ custom-flannel-475514     │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │ 08 Dec 25 02:06 UTC │
	│ start   │ -p enable-default-cni-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd │ enable-default-cni-475514 │ jenkins │ v1.37.0 │ 08 Dec 25 02:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 02:06:47
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 02:06:47.912921 1183946 out.go:360] Setting OutFile to fd 1 ...
	I1208 02:06:47.913138 1183946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:06:47.913172 1183946 out.go:374] Setting ErrFile to fd 2...
	I1208 02:06:47.913199 1183946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 02:06:47.913509 1183946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 02:06:47.915035 1183946 out.go:368] Setting JSON to false
	I1208 02:06:47.915897 1183946 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":24561,"bootTime":1765135047,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 02:06:47.915964 1183946 start.go:143] virtualization:  
	I1208 02:06:47.920518 1183946 out.go:179] * [enable-default-cni-475514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 02:06:47.925191 1183946 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 02:06:47.925263 1183946 notify.go:221] Checking for updates...
	I1208 02:06:47.932034 1183946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 02:06:47.935291 1183946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 02:06:47.938615 1183946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 02:06:47.941876 1183946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 02:06:47.945053 1183946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 02:06:47.948648 1183946 config.go:182] Loaded profile config "no-preload-536520": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 02:06:47.948756 1183946 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 02:06:47.988872 1183946 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 02:06:47.989022 1183946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:06:48.054927 1183946 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:06:48.044978542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:06:48.055046 1183946 docker.go:319] overlay module found
	I1208 02:06:48.060479 1183946 out.go:179] * Using the docker driver based on user configuration
	I1208 02:06:48.063355 1183946 start.go:309] selected driver: docker
	I1208 02:06:48.063373 1183946 start.go:927] validating driver "docker" against <nil>
	I1208 02:06:48.063389 1183946 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 02:06:48.064152 1183946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 02:06:48.119588 1183946 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 02:06:48.110211768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 02:06:48.119751 1183946 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	E1208 02:06:48.119975 1183946 start_flags.go:481] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1208 02:06:48.120002 1183946 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 02:06:48.123067 1183946 out.go:179] * Using Docker driver with root privileges
	I1208 02:06:48.126086 1183946 cni.go:84] Creating CNI manager for "bridge"
	I1208 02:06:48.126113 1183946 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 02:06:48.126220 1183946 start.go:353] cluster config:
	{Name:enable-default-cni-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:06:48.129442 1183946 out.go:179] * Starting "enable-default-cni-475514" primary control-plane node in "enable-default-cni-475514" cluster
	I1208 02:06:48.132451 1183946 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 02:06:48.135358 1183946 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1208 02:06:48.138204 1183946 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:06:48.138246 1183946 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1208 02:06:48.138274 1183946 cache.go:65] Caching tarball of preloaded images
	I1208 02:06:48.138314 1183946 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 02:06:48.138361 1183946 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1208 02:06:48.138382 1183946 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1208 02:06:48.138523 1183946 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/config.json ...
	I1208 02:06:48.138544 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/config.json: {Name:mk51953fa0ea25037ba24032cde6708d2ac3f72f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:06:48.165554 1183946 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 02:06:48.165580 1183946 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1208 02:06:48.165596 1183946 cache.go:243] Successfully downloaded all kic artifacts
	I1208 02:06:48.165628 1183946 start.go:360] acquireMachinesLock for enable-default-cni-475514: {Name:mk3aad74d39c52bcaa944d901ac1a2e08e2ba51b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 02:06:48.165738 1183946 start.go:364] duration metric: took 87.64µs to acquireMachinesLock for "enable-default-cni-475514"
	I1208 02:06:48.165770 1183946 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1208 02:06:48.165847 1183946 start.go:125] createHost starting for "" (driver="docker")
	I1208 02:06:48.169179 1183946 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1208 02:06:48.169430 1183946 start.go:159] libmachine.API.Create for "enable-default-cni-475514" (driver="docker")
	I1208 02:06:48.169468 1183946 client.go:173] LocalClient.Create starting
	I1208 02:06:48.169554 1183946 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
	I1208 02:06:48.169595 1183946 main.go:143] libmachine: Decoding PEM data...
	I1208 02:06:48.169613 1183946 main.go:143] libmachine: Parsing certificate...
	I1208 02:06:48.169681 1183946 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
	I1208 02:06:48.169706 1183946 main.go:143] libmachine: Decoding PEM data...
	I1208 02:06:48.169718 1183946 main.go:143] libmachine: Parsing certificate...
	I1208 02:06:48.170079 1183946 cli_runner.go:164] Run: docker network inspect enable-default-cni-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1208 02:06:48.186474 1183946 cli_runner.go:211] docker network inspect enable-default-cni-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1208 02:06:48.186558 1183946 network_create.go:284] running [docker network inspect enable-default-cni-475514] to gather additional debugging logs...
	I1208 02:06:48.186592 1183946 cli_runner.go:164] Run: docker network inspect enable-default-cni-475514
	W1208 02:06:48.203814 1183946 cli_runner.go:211] docker network inspect enable-default-cni-475514 returned with exit code 1
	I1208 02:06:48.203844 1183946 network_create.go:287] error running [docker network inspect enable-default-cni-475514]: docker network inspect enable-default-cni-475514: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network enable-default-cni-475514 not found
	I1208 02:06:48.203866 1183946 network_create.go:289] output of [docker network inspect enable-default-cni-475514]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network enable-default-cni-475514 not found
	
	** /stderr **
	I1208 02:06:48.203966 1183946 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:06:48.222382 1183946 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
	I1208 02:06:48.222869 1183946 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-68ab5e77b290 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:48:8c:e0:76:bf} reservation:<nil>}
	I1208 02:06:48.223303 1183946 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6cdeefff8c4a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:57:fe:42:23:11} reservation:<nil>}
	I1208 02:06:48.223806 1183946 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3eee0}
	I1208 02:06:48.223831 1183946 network_create.go:124] attempt to create docker network enable-default-cni-475514 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1208 02:06:48.223893 1183946 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-475514 enable-default-cni-475514
	I1208 02:06:48.286186 1183946 network_create.go:108] docker network enable-default-cni-475514 192.168.76.0/24 created
	I1208 02:06:48.286227 1183946 kic.go:121] calculated static IP "192.168.76.2" for the "enable-default-cni-475514" container
	I1208 02:06:48.286314 1183946 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1208 02:06:48.303922 1183946 cli_runner.go:164] Run: docker volume create enable-default-cni-475514 --label name.minikube.sigs.k8s.io=enable-default-cni-475514 --label created_by.minikube.sigs.k8s.io=true
	I1208 02:06:48.326829 1183946 oci.go:103] Successfully created a docker volume enable-default-cni-475514
	I1208 02:06:48.326941 1183946 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-475514-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-475514 --entrypoint /usr/bin/test -v enable-default-cni-475514:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1208 02:06:48.883779 1183946 oci.go:107] Successfully prepared a docker volume enable-default-cni-475514
	I1208 02:06:48.883842 1183946 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:06:48.883857 1183946 kic.go:194] Starting extracting preloaded images to volume ...
	I1208 02:06:48.883938 1183946 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-475514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1208 02:06:52.926754 1183946 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-475514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.042750907s)
	I1208 02:06:52.926787 1183946 kic.go:203] duration metric: took 4.042926909s to extract preloaded images to volume ...
	W1208 02:06:52.926935 1183946 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1208 02:06:52.927046 1183946 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1208 02:06:52.982733 1183946 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-475514 --name enable-default-cni-475514 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-475514 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-475514 --network enable-default-cni-475514 --ip 192.168.76.2 --volume enable-default-cni-475514:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1208 02:06:53.316438 1183946 cli_runner.go:164] Run: docker container inspect enable-default-cni-475514 --format={{.State.Running}}
	I1208 02:06:53.342424 1183946 cli_runner.go:164] Run: docker container inspect enable-default-cni-475514 --format={{.State.Status}}
	I1208 02:06:53.367917 1183946 cli_runner.go:164] Run: docker exec enable-default-cni-475514 stat /var/lib/dpkg/alternatives/iptables
	I1208 02:06:53.418403 1183946 oci.go:144] the created container "enable-default-cni-475514" has a running status.
	I1208 02:06:53.418432 1183946 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa...
	I1208 02:06:53.702556 1183946 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1208 02:06:53.727465 1183946 cli_runner.go:164] Run: docker container inspect enable-default-cni-475514 --format={{.State.Status}}
	I1208 02:06:53.754071 1183946 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1208 02:06:53.754094 1183946 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-475514 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1208 02:06:53.831303 1183946 cli_runner.go:164] Run: docker container inspect enable-default-cni-475514 --format={{.State.Status}}
	I1208 02:06:53.864486 1183946 machine.go:94] provisionDockerMachine start ...
	I1208 02:06:53.864579 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:53.888173 1183946 main.go:143] libmachine: Using SSH client type: native
	I1208 02:06:53.888542 1183946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I1208 02:06:53.888564 1183946 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 02:06:53.889252 1183946 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1208 02:06:57.042391 1183946 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-475514
	
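	The "docker container inspect" template above resolves the host port Docker published for the container's 22/tcp; "docker port" performs the same lookup. A sketch (the 33898 mapping is specific to this run):
	
	  docker port enable-default-cni-475514 22/tcp
	  # -> 127.0.0.1:33898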
	I1208 02:06:57.042417 1183946 ubuntu.go:182] provisioning hostname "enable-default-cni-475514"
	I1208 02:06:57.042513 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:57.060537 1183946 main.go:143] libmachine: Using SSH client type: native
	I1208 02:06:57.060848 1183946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I1208 02:06:57.060927 1183946 main.go:143] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-475514 && echo "enable-default-cni-475514" | sudo tee /etc/hostname
	I1208 02:06:57.223700 1183946 main.go:143] libmachine: SSH cmd err, output: <nil>: enable-default-cni-475514
	
	I1208 02:06:57.223780 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:57.246069 1183946 main.go:143] libmachine: Using SSH client type: native
	I1208 02:06:57.246390 1183946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33898 <nil> <nil>}
	I1208 02:06:57.246407 1183946 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-475514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-475514/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-475514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 02:06:57.398800 1183946 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 02:06:57.398828 1183946 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
	I1208 02:06:57.398849 1183946 ubuntu.go:190] setting up certificates
	I1208 02:06:57.398858 1183946 provision.go:84] configureAuth start
	I1208 02:06:57.398927 1183946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-475514
	I1208 02:06:57.415691 1183946 provision.go:143] copyHostCerts
	I1208 02:06:57.415767 1183946 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
	I1208 02:06:57.415783 1183946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
	I1208 02:06:57.415865 1183946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
	I1208 02:06:57.415966 1183946 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
	I1208 02:06:57.415978 1183946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
	I1208 02:06:57.416006 1183946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
	I1208 02:06:57.416070 1183946 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
	I1208 02:06:57.416079 1183946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
	I1208 02:06:57.416103 1183946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
	I1208 02:06:57.416164 1183946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-475514 san=[127.0.0.1 192.168.76.2 enable-default-cni-475514 localhost minikube]
	I1208 02:06:57.728666 1183946 provision.go:177] copyRemoteCerts
	I1208 02:06:57.728770 1183946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 02:06:57.728831 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:57.747204 1183946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa Username:docker}
	I1208 02:06:57.854089 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 02:06:57.871583 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 02:06:57.889192 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1208 02:06:57.906504 1183946 provision.go:87] duration metric: took 507.633908ms to configureAuth
	I1208 02:06:57.906589 1183946 ubuntu.go:206] setting minikube options for container-runtime
	I1208 02:06:57.906807 1183946 config.go:182] Loaded profile config "enable-default-cni-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 02:06:57.906820 1183946 machine.go:97] duration metric: took 4.042310404s to provisionDockerMachine
	I1208 02:06:57.906828 1183946 client.go:176] duration metric: took 9.737350895s to LocalClient.Create
	I1208 02:06:57.906851 1183946 start.go:167] duration metric: took 9.737422863s to libmachine.API.Create "enable-default-cni-475514"
	I1208 02:06:57.906864 1183946 start.go:293] postStartSetup for "enable-default-cni-475514" (driver="docker")
	I1208 02:06:57.906874 1183946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 02:06:57.906928 1183946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 02:06:57.906980 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:57.924014 1183946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa Username:docker}
	I1208 02:06:58.031072 1183946 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 02:06:58.034472 1183946 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1208 02:06:58.034500 1183946 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1208 02:06:58.034511 1183946 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
	I1208 02:06:58.034569 1183946 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
	I1208 02:06:58.034651 1183946 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
	I1208 02:06:58.034751 1183946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 02:06:58.042282 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 02:06:58.061828 1183946 start.go:296] duration metric: took 154.949275ms for postStartSetup
	I1208 02:06:58.062211 1183946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-475514
	I1208 02:06:58.079644 1183946 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/config.json ...
	I1208 02:06:58.079938 1183946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 02:06:58.079992 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:58.097779 1183946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa Username:docker}
	I1208 02:06:58.200047 1183946 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1208 02:06:58.204971 1183946 start.go:128] duration metric: took 10.039108511s to createHost
	I1208 02:06:58.205001 1183946 start.go:83] releasing machines lock for "enable-default-cni-475514", held for 10.039247943s
	I1208 02:06:58.205106 1183946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-475514
	I1208 02:06:58.222689 1183946 ssh_runner.go:195] Run: cat /version.json
	I1208 02:06:58.222755 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:58.222760 1183946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 02:06:58.222822 1183946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-475514
	I1208 02:06:58.242686 1183946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa Username:docker}
	I1208 02:06:58.252135 1183946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33898 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/enable-default-cni-475514/id_rsa Username:docker}
	I1208 02:06:58.434661 1183946 ssh_runner.go:195] Run: systemctl --version
	I1208 02:06:58.440897 1183946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 02:06:58.445443 1183946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 02:06:58.445519 1183946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 02:06:58.474170 1183946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1208 02:06:58.474196 1183946 start.go:496] detecting cgroup driver to use...
	I1208 02:06:58.474229 1183946 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1208 02:06:58.474284 1183946 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1208 02:06:58.489881 1183946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1208 02:06:58.503470 1183946 docker.go:218] disabling cri-docker service (if available) ...
	I1208 02:06:58.503559 1183946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 02:06:58.521673 1183946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 02:06:58.540477 1183946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 02:06:58.664589 1183946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 02:06:58.789443 1183946 docker.go:234] disabling docker service ...
	I1208 02:06:58.789550 1183946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 02:06:58.811964 1183946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 02:06:58.825784 1183946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 02:06:58.954846 1183946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 02:06:59.086298 1183946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 02:06:59.107657 1183946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 02:06:59.121830 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1208 02:06:59.131936 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1208 02:06:59.140692 1183946 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1208 02:06:59.140770 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1208 02:06:59.149550 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 02:06:59.158380 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1208 02:06:59.167175 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1208 02:06:59.176795 1183946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 02:06:59.185219 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1208 02:06:59.194162 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1208 02:06:59.203326 1183946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
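	Taken together, the sed edits above leave /etc/containerd/config.toml with roughly these CRI settings; this is a sketch assuming the stock kicbase layout, and the exact section nesting depends on the config version the image ships:
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    restrict_oom_score_adj = false
	    sandbox_image = "registry.k8s.io/pause:3.10.1"
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false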
	I1208 02:06:59.213054 1183946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 02:06:59.220666 1183946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 02:06:59.228194 1183946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:06:59.339669 1183946 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1208 02:06:59.459232 1183946 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1208 02:06:59.459357 1183946 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1208 02:06:59.463277 1183946 start.go:564] Will wait 60s for crictl version
	I1208 02:06:59.463387 1183946 ssh_runner.go:195] Run: which crictl
	I1208 02:06:59.466985 1183946 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1208 02:06:59.495552 1183946 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1208 02:06:59.495669 1183946 ssh_runner.go:195] Run: containerd --version
	I1208 02:06:59.519069 1183946 ssh_runner.go:195] Run: containerd --version
	I1208 02:06:59.545580 1183946 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.2.0 ...
	I1208 02:06:59.548522 1183946 cli_runner.go:164] Run: docker network inspect enable-default-cni-475514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1208 02:06:59.565124 1183946 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1208 02:06:59.568847 1183946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:06:59.578632 1183946 kubeadm.go:884] updating cluster {Name:enable-default-cni-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 02:06:59.578755 1183946 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 02:06:59.578828 1183946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:06:59.603113 1183946 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 02:06:59.603142 1183946 containerd.go:534] Images already preloaded, skipping extraction
	I1208 02:06:59.603201 1183946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 02:06:59.630311 1183946 containerd.go:627] all images are preloaded for containerd runtime.
	I1208 02:06:59.630336 1183946 cache_images.go:86] Images are preloaded, skipping loading
	I1208 02:06:59.630344 1183946 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 containerd true true} ...
	I1208 02:06:59.630432 1183946 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-475514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1208 02:06:59.630523 1183946 ssh_runner.go:195] Run: sudo crictl info
	I1208 02:06:59.654765 1183946 cni.go:84] Creating CNI manager for "bridge"
	I1208 02:06:59.654810 1183946 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 02:06:59.654833 1183946 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-475514 NodeName:enable-default-cni-475514 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 02:06:59.654966 1183946 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "enable-default-cni-475514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
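	
	A config assembled like this can be sanity-checked offline before kubeadm init runs; "kubeadm config validate" exists in recent kubeadm releases (treat its availability as an assumption for other versions):
	
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml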
	
	I1208 02:06:59.655044 1183946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 02:06:59.662939 1183946 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 02:06:59.663007 1183946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 02:06:59.671801 1183946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I1208 02:06:59.684653 1183946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 02:06:59.698019 1183946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2238 bytes)
	I1208 02:06:59.711597 1183946 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1208 02:06:59.715430 1183946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 02:06:59.725453 1183946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 02:06:59.838527 1183946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 02:06:59.857435 1183946 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514 for IP: 192.168.76.2
	I1208 02:06:59.857458 1183946 certs.go:195] generating shared ca certs ...
	I1208 02:06:59.857474 1183946 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:06:59.857625 1183946 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
	I1208 02:06:59.857674 1183946 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
	I1208 02:06:59.857686 1183946 certs.go:257] generating profile certs ...
	I1208 02:06:59.857743 1183946 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.key
	I1208 02:06:59.857759 1183946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.crt with IP's: []
	I1208 02:07:00.365234 1183946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.crt ...
	I1208 02:07:00.366901 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.crt: {Name:mk0101caea72db5a588dab594f97f2ad5e06b03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.367235 1183946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.key ...
	I1208 02:07:00.397863 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/client.key: {Name:mkb3613c262d5e7e2a604bc9e70327469a3b6316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.398120 1183946 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key.5787e387
	I1208 02:07:00.398168 1183946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt.5787e387 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1208 02:07:00.848314 1183946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt.5787e387 ...
	I1208 02:07:00.848358 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt.5787e387: {Name:mk2c466c5ef8398e9d4b888cab69dde3e47ca0ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.848598 1183946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key.5787e387 ...
	I1208 02:07:00.848618 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key.5787e387: {Name:mkc4e7d6a282a417f0b4224ce647c4895b2461d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.848711 1183946 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt.5787e387 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt
	I1208 02:07:00.848794 1183946 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key.5787e387 -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key
	I1208 02:07:00.848857 1183946 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.key
	I1208 02:07:00.848876 1183946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.crt with IP's: []
	I1208 02:07:00.926878 1183946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.crt ...
	I1208 02:07:00.926908 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.crt: {Name:mka0fc2bb5d2f394551666424565c570f0511ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.927085 1183946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.key ...
	I1208 02:07:00.927100 1183946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.key: {Name:mkd107f40d521560c93db9fc1540abb4d8b90e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 02:07:00.927283 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
	W1208 02:07:00.927329 1183946 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
	I1208 02:07:00.927339 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 02:07:00.927368 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
	I1208 02:07:00.927399 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
	I1208 02:07:00.927428 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
	I1208 02:07:00.927476 1183946 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
	I1208 02:07:00.928053 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 02:07:00.947248 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1208 02:07:00.967135 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 02:07:00.987139 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 02:07:01.006603 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1208 02:07:01.025601 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 02:07:01.043648 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 02:07:01.060965 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/enable-default-cni-475514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 02:07:01.079305 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
	I1208 02:07:01.097181 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 02:07:01.115566 1183946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
	I1208 02:07:01.135160 1183946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 02:07:01.149175 1183946 ssh_runner.go:195] Run: openssl version
	I1208 02:07:01.155914 1183946 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
	I1208 02:07:01.164408 1183946 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
	I1208 02:07:01.172552 1183946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
	I1208 02:07:01.176642 1183946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 00:22 /usr/share/ca-certificates/8467112.pem
	I1208 02:07:01.176714 1183946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
	I1208 02:07:01.219236 1183946 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 02:07:01.227169 1183946 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
	I1208 02:07:01.234843 1183946 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:07:01.242738 1183946 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 02:07:01.250765 1183946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:07:01.255211 1183946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 00:12 /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:07:01.255289 1183946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 02:07:01.296908 1183946 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 02:07:01.305102 1183946 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 02:07:01.313110 1183946 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
	I1208 02:07:01.323934 1183946 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
	I1208 02:07:01.333163 1183946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
	I1208 02:07:01.337485 1183946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 00:22 /usr/share/ca-certificates/846711.pem
	I1208 02:07:01.337591 1183946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
	I1208 02:07:01.421795 1183946 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 02:07:01.437261 1183946 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
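	The ".0" links created above follow OpenSSL's subject-hash (c_rehash) convention: the verifier finds a CA in /etc/ssl/certs by hashing its subject name. The hash printed by "openssl x509 -hash" is exactly the link name; for minikubeCA.pem above:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # b5213941  -> /etc/ssl/certs/b5213941.0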
	I1208 02:07:01.446137 1183946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 02:07:01.449887 1183946 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 02:07:01.449987 1183946 kubeadm.go:401] StartCluster: {Name:enable-default-cni-475514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:enable-default-cni-475514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 02:07:01.450083 1183946 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1208 02:07:01.450154 1183946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 02:07:01.477378 1183946 cri.go:89] found id: ""
	I1208 02:07:01.477461 1183946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 02:07:01.486096 1183946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 02:07:01.494185 1183946 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1208 02:07:01.494279 1183946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 02:07:01.502739 1183946 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 02:07:01.502765 1183946 kubeadm.go:158] found existing configuration files:
	
	I1208 02:07:01.502837 1183946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 02:07:01.510940 1183946 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 02:07:01.511032 1183946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 02:07:01.518943 1183946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 02:07:01.527142 1183946 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 02:07:01.527259 1183946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 02:07:01.535602 1183946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 02:07:01.543514 1183946 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 02:07:01.543604 1183946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 02:07:01.551352 1183946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 02:07:01.560145 1183946 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 02:07:01.560255 1183946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 02:07:01.568345 1183946 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1208 02:07:01.610285 1183946 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 02:07:01.610361 1183946 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 02:07:01.643174 1183946 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1208 02:07:01.643255 1183946 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1208 02:07:01.643296 1183946 kubeadm.go:319] OS: Linux
	I1208 02:07:01.643346 1183946 kubeadm.go:319] CGROUPS_CPU: enabled
	I1208 02:07:01.643398 1183946 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1208 02:07:01.643454 1183946 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1208 02:07:01.643507 1183946 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1208 02:07:01.643566 1183946 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1208 02:07:01.643620 1183946 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1208 02:07:01.643669 1183946 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1208 02:07:01.643721 1183946 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1208 02:07:01.643771 1183946 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1208 02:07:01.720332 1183946 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 02:07:01.720455 1183946 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 02:07:01.720553 1183946 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 02:07:01.726060 1183946 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 02:07:01.733478 1183946 out.go:252]   - Generating certificates and keys ...
	I1208 02:07:01.733579 1183946 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 02:07:01.733650 1183946 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 02:07:02.020242 1183946 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 02:07:02.611109 1183946 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 02:07:03.638108 1183946 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 02:07:04.204082 1183946 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 02:07:04.463341 1183946 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 02:07:04.463525 1183946 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-475514 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 02:07:04.644990 1183946 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 02:07:04.645352 1183946 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-475514 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1208 02:07:05.153002 1183946 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 02:07:05.414400 1183946 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 02:07:05.881627 1183946 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 02:07:05.885695 1183946 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 02:07:07.559396 1183946 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012707347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012722928Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012784722Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012807458Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012980408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.012995694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013007829Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013026414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013056306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013096109Z" level=info msg="Connect containerd service"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.013397585Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.014248932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024764086Z" level=info msg="Start subscribing containerd event"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.024952617Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025010275Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.025073168Z" level=info msg="Start recovering state"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046219867Z" level=info msg="Start event monitor"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046299482Z" level=info msg="Start cni network conf syncer for default"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046310116Z" level=info msg="Start streaming server"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046320315Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046329185Z" level=info msg="runtime interface starting up..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046337029Z" level=info msg="starting plugins..."
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.046369292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 08 01:47:31 no-preload-536520 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 08 01:47:31 no-preload-536520 containerd[556]: time="2025-12-08T01:47:31.048165739Z" level=info msg="containerd successfully booted in 0.067149s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1208 02:07:08.772437   10377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:07:08.773641   10377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:07:08.774661   10377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:07:08.775758   10377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1208 02:07:08.776744   10377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:07:08 up  6:49,  0 user,  load average: 2.31, 1.69, 1.48
	Linux no-preload-536520 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 08 02:07:05 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:07:05 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1563.
	Dec 08 02:07:05 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:05 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:05 no-preload-536520 kubelet[10242]: E1208 02:07:05.905676   10242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:07:05 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:07:05 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:07:06 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1564.
	Dec 08 02:07:06 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:06 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:06 no-preload-536520 kubelet[10248]: E1208 02:07:06.691721   10248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:07:06 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:07:06 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:07:07 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1565.
	Dec 08 02:07:07 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:07 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:07 no-preload-536520 kubelet[10267]: E1208 02:07:07.533070   10267 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:07:07 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:07:07 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 08 02:07:08 no-preload-536520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1566.
	Dec 08 02:07:08 no-preload-536520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:08 no-preload-536520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 08 02:07:08 no-preload-536520 kubelet[10312]: E1208 02:07:08.472281   10312 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 08 02:07:08 no-preload-536520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 08 02:07:08 no-preload-536520 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
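The kubelet log above shows the root cause of this group of failures: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it endlessly (restart counter 1563-1566 in the excerpt), and the API server consequently never comes up. A generic way to check which cgroup version a host runs (an illustrative command, not part of this test run):

	# Prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/

On an Ubuntu 20.04 runner like this one, switching to cgroup v2 generally means booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line; that is standard systemd behavior, not something this job configures.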
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-536520 -n no-preload-536520: exit status 2 (430.876815ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-536520" apiserver is not running, skipping kubectl commands (state="Stopped")
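Once status reports the apiserver as Stopped, the helper skips the kubectl checks. The --format flag used here is a Go template over minikube's status struct, so several fields can be read in one invocation; the exact field set below is an illustration based on minikube's documented status fields, not a command this run executed:

	# Illustrative: print host, kubelet, and apiserver state in one call.
	out/minikube-linux-arm64 status -p no-preload-536520 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'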
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (272.22s)
E1208 02:09:16.296650  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

Test pass (345/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.35
9 TestDownloadOnly/v1.28.0/DeleteAll 0.37
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.25
12 TestDownloadOnly/v1.34.2/json-events 5.13
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.75
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.12
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.22
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 138.37
38 TestAddons/serial/Volcano 42.17
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/serial/GCPAuth/FakeCredentials 9.87
44 TestAddons/parallel/Registry 17.4
45 TestAddons/parallel/RegistryCreds 0.84
46 TestAddons/parallel/Ingress 19.43
47 TestAddons/parallel/InspektorGadget 12.24
48 TestAddons/parallel/MetricsServer 5.89
50 TestAddons/parallel/CSI 42.83
51 TestAddons/parallel/Headlamp 16.97
52 TestAddons/parallel/CloudSpanner 6.62
53 TestAddons/parallel/LocalPath 52.78
54 TestAddons/parallel/NvidiaDevicePlugin 6.63
55 TestAddons/parallel/Yakd 11.94
57 TestAddons/StoppedEnableDisable 12.38
58 TestCertOptions 33.88
59 TestCertExpiration 223.17
61 TestForceSystemdFlag 37.33
62 TestForceSystemdEnv 37.14
63 TestDockerEnvContainerd 45.22
67 TestErrorSpam/setup 30.73
68 TestErrorSpam/start 0.86
69 TestErrorSpam/status 1.09
70 TestErrorSpam/pause 1.75
71 TestErrorSpam/unpause 1.95
72 TestErrorSpam/stop 1.66
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.56
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 7.37
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
84 TestFunctional/serial/CacheCmd/cache/add_local 1.26
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.08
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
92 TestFunctional/serial/ExtraConfig 41.85
93 TestFunctional/serial/ComponentHealth 0.09
94 TestFunctional/serial/LogsCmd 1.55
95 TestFunctional/serial/LogsFileCmd 1.53
96 TestFunctional/serial/InvalidService 4.32
98 TestFunctional/parallel/ConfigCmd 0.46
99 TestFunctional/parallel/DashboardCmd 9.11
100 TestFunctional/parallel/DryRun 0.59
101 TestFunctional/parallel/InternationalLanguage 0.28
102 TestFunctional/parallel/StatusCmd 1.26
106 TestFunctional/parallel/ServiceCmdConnect 7.68
107 TestFunctional/parallel/AddonsCmd 0.2
108 TestFunctional/parallel/PersistentVolumeClaim 24.88
110 TestFunctional/parallel/SSHCmd 0.76
111 TestFunctional/parallel/CpCmd 2.49
113 TestFunctional/parallel/FileSync 0.31
114 TestFunctional/parallel/CertSync 2.33
118 TestFunctional/parallel/NodeLabels 0.25
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
122 TestFunctional/parallel/License 0.55
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.52
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.33
135 TestFunctional/parallel/ServiceCmd/List 0.53
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.59
139 TestFunctional/parallel/ServiceCmd/Format 0.55
140 TestFunctional/parallel/ServiceCmd/URL 0.52
141 TestFunctional/parallel/ProfileCmd/profile_list 0.63
142 TestFunctional/parallel/MountCmd/any-port 10.13
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.62
144 TestFunctional/parallel/MountCmd/specific-port 1.8
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
146 TestFunctional/parallel/Version/short 0.08
147 TestFunctional/parallel/Version/components 0.82
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.74
153 TestFunctional/parallel/ImageCommands/Setup 0.69
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
157 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
158 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
159 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.45
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.04
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 2.06
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.96
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.05
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.48
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.19
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.75
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.27
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.31
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.78
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.29
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.42
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.64
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.43
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.53
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.26
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.24
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.64
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.12
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.09
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.5
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.69
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 177.54
265 TestMultiControlPlane/serial/DeployApp 7.55
266 TestMultiControlPlane/serial/PingHostFromPods 1.67
267 TestMultiControlPlane/serial/AddWorkerNode 60.18
268 TestMultiControlPlane/serial/NodeLabels 0.14
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.12
270 TestMultiControlPlane/serial/CopyFile 21.09
271 TestMultiControlPlane/serial/StopSecondaryNode 13.33
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
273 TestMultiControlPlane/serial/RestartSecondaryNode 14.83
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.23
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.06
276 TestMultiControlPlane/serial/DeleteSecondaryNode 11.13
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.82
278 TestMultiControlPlane/serial/StopCluster 36.49
279 TestMultiControlPlane/serial/RestartCluster 61.97
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.84
281 TestMultiControlPlane/serial/AddSecondaryNode 101.62
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.19
287 TestJSONOutput/start/Command 81.85
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.73
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.64
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 6.03
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.24
312 TestKicCustomNetwork/create_custom_network 61.36
313 TestKicCustomNetwork/use_default_bridge_network 33.76
314 TestKicExistingNetwork 34.18
315 TestKicCustomSubnet 37.79
316 TestKicStaticIP 36.97
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 73.1
321 TestMountStart/serial/StartWithMountFirst 8.49
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.6
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.75
326 TestMountStart/serial/VerifyMountPostDelete 0.29
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 7.71
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 110.3
333 TestMultiNode/serial/DeployApp2Nodes 5.41
334 TestMultiNode/serial/PingHostFrom2Pods 1.01
335 TestMultiNode/serial/AddNode 56.47
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.76
339 TestMultiNode/serial/StopNode 2.47
340 TestMultiNode/serial/StartAfterStop 8.04
341 TestMultiNode/serial/RestartKeepsNodes 74.95
342 TestMultiNode/serial/DeleteNode 5.82
343 TestMultiNode/serial/StopMultiNode 24.24
344 TestMultiNode/serial/RestartMultiNode 49.29
345 TestMultiNode/serial/ValidateNameConflict 37.07
350 TestPreload 119.62
352 TestScheduledStopUnix 107.7
355 TestInsufficientStorage 12.95
356 TestRunningBinaryUpgrade 73
359 TestMissingContainerUpgrade 171.63
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
362 TestNoKubernetes/serial/StartWithK8s 43.1
363 TestNoKubernetes/serial/StartWithStopK8s 11.08
364 TestNoKubernetes/serial/Start 9.13
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
367 TestNoKubernetes/serial/ProfileList 1.36
368 TestNoKubernetes/serial/Stop 1.43
369 TestNoKubernetes/serial/StartNoArgs 7.2
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
371 TestStoppedBinaryUpgrade/Setup 11.56
372 TestStoppedBinaryUpgrade/Upgrade 302.2
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.04
382 TestPause/serial/Start 79.2
383 TestPause/serial/SecondStartNoReconfiguration 6.36
384 TestPause/serial/Pause 0.72
385 TestPause/serial/VerifyStatus 0.34
386 TestPause/serial/Unpause 0.67
387 TestPause/serial/PauseAgain 0.91
388 TestPause/serial/DeletePaused 2.88
389 TestPause/serial/VerifyDeletedResources 0.48
397 TestNetworkPlugins/group/false 3.74
402 TestStartStop/group/old-k8s-version/serial/FirstStart 62.89
405 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
406 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.47
407 TestStartStop/group/old-k8s-version/serial/Stop 12.4
408 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
409 TestStartStop/group/old-k8s-version/serial/SecondStart 52.74
410 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
413 TestStartStop/group/old-k8s-version/serial/Pause 3.26
415 TestStartStop/group/embed-certs/serial/FirstStart 50.77
416 TestStartStop/group/embed-certs/serial/DeployApp 8.35
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
418 TestStartStop/group/embed-certs/serial/Stop 12.14
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
420 TestStartStop/group/embed-certs/serial/SecondStart 53.69
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
424 TestStartStop/group/embed-certs/serial/Pause 3.12
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.7
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.11
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.37
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.23
440 TestStartStop/group/no-preload/serial/Stop 1.3
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
453 TestNetworkPlugins/group/auto/Start 78.03
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 8.27
456 TestNetworkPlugins/group/auto/DNS 0.17
457 TestNetworkPlugins/group/auto/Localhost 0.21
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 81.06
461 TestNetworkPlugins/group/kindnet/ControllerPod 6
462 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
463 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
464 TestNetworkPlugins/group/kindnet/DNS 0.18
465 TestNetworkPlugins/group/kindnet/Localhost 0.16
466 TestNetworkPlugins/group/kindnet/HairPin 0.16
467 TestNetworkPlugins/group/calico/Start 58.35
468 TestNetworkPlugins/group/calico/ControllerPod 6
469 TestNetworkPlugins/group/calico/KubeletFlags 0.32
470 TestNetworkPlugins/group/calico/NetCatPod 10.29
471 TestNetworkPlugins/group/calico/DNS 0.18
472 TestNetworkPlugins/group/calico/Localhost 0.16
473 TestNetworkPlugins/group/calico/HairPin 0.17
474 TestNetworkPlugins/group/custom-flannel/Start 59.5
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.26
477 TestNetworkPlugins/group/custom-flannel/DNS 0.17
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
480 TestNetworkPlugins/group/enable-default-cni/Start 80
481 TestNetworkPlugins/group/flannel/Start 60.94
482 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
483 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
484 TestNetworkPlugins/group/flannel/ControllerPod 6.01
485 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
486 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
487 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
488 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
489 TestNetworkPlugins/group/flannel/NetCatPod 9.29
490 TestNetworkPlugins/group/flannel/DNS 0.24
491 TestNetworkPlugins/group/flannel/Localhost 0.23
492 TestNetworkPlugins/group/flannel/HairPin 0.24
493 TestNetworkPlugins/group/bridge/Start 42.9
494 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
495 TestNetworkPlugins/group/bridge/NetCatPod 10.28
496 TestNetworkPlugins/group/bridge/DNS 0.17
497 TestNetworkPlugins/group/bridge/Localhost 0.16
498 TestNetworkPlugins/group/bridge/HairPin 0.14

TestDownloadOnly/v1.28.0/json-events (9.56s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-150411 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-150411 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.557391169s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.56s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1208 00:11:58.362335  846711 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1208 00:11:58.362417  846711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
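Both lines are satisfied from the cache populated by the json-events step above, so nothing is downloaded here. Under minikube's default layout the same cache can be listed directly; the path below assumes a stock $HOME/.minikube (this job points MINIKUBE_HOME at a workspace copy instead):

	# Illustrative: list cached preload tarballs under the default layout.
	ls -lh "$HOME/.minikube/cache/preloaded-tarball/"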
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-150411
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-150411: exit status 85 (344.748451ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-150411 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-150411 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:11:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:11:48.854842  846716 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:11:48.854966  846716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:11:48.854972  846716 out.go:374] Setting ErrFile to fd 2...
	I1208 00:11:48.854980  846716 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:11:48.855380  846716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	W1208 00:11:48.855556  846716 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22054-843440/.minikube/config/config.json: open /home/jenkins/minikube-integration/22054-843440/.minikube/config/config.json: no such file or directory
	I1208 00:11:48.855979  846716 out.go:368] Setting JSON to true
	I1208 00:11:48.856880  846716 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17662,"bootTime":1765135047,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:11:48.856976  846716 start.go:143] virtualization:  
	I1208 00:11:48.862802  846716 out.go:99] [download-only-150411] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1208 00:11:48.863030  846716 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 00:11:48.863159  846716 notify.go:221] Checking for updates...
	I1208 00:11:48.867281  846716 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:11:48.870559  846716 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:11:48.873641  846716 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:11:48.876698  846716 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:11:48.879687  846716 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:11:48.885555  846716 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:11:48.885836  846716 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:11:48.908702  846716 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:11:48.908820  846716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:11:48.970964  846716 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-08 00:11:48.961337228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:11:48.971071  846716 docker.go:319] overlay module found
	I1208 00:11:48.974157  846716 out.go:99] Using the docker driver based on user configuration
	I1208 00:11:48.974216  846716 start.go:309] selected driver: docker
	I1208 00:11:48.974229  846716 start.go:927] validating driver "docker" against <nil>
	I1208 00:11:48.974354  846716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:11:49.035370  846716 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-08 00:11:49.02623238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:11:49.035539  846716 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:11:49.035829  846716 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:11:49.036009  846716 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:11:49.039262  846716 out.go:171] Using Docker driver with root privileges
	I1208 00:11:49.042377  846716 cni.go:84] Creating CNI manager for ""
	I1208 00:11:49.042486  846716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:11:49.042502  846716 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:11:49.042590  846716 start.go:353] cluster config:
	{Name:download-only-150411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-150411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:11:49.045704  846716 out.go:99] Starting "download-only-150411" primary control-plane node in "download-only-150411" cluster
	I1208 00:11:49.045724  846716 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:11:49.048545  846716 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:11:49.048587  846716 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1208 00:11:49.048742  846716 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:11:49.067359  846716 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:11:49.067382  846716 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:11:49.067534  846716 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:11:49.067632  846716 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:11:49.101761  846716 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:11:49.101793  846716 cache.go:65] Caching tarball of preloaded images
	I1208 00:11:49.101976  846716 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1208 00:11:49.105346  846716 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1208 00:11:49.105386  846716 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1208 00:11:49.188871  846716 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1208 00:11:49.189014  846716 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1208 00:11:53.612880  846716 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1208 00:11:53.613349  846716 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/download-only-150411/config.json ...
	I1208 00:11:53.613409  846716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/download-only-150411/config.json: {Name:mkd3e731b08ae341e715f06fa20e3850b4abf33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:11:53.613609  846716 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1208 00:11:53.613838  846716 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-150411 host does not exist
	  To start a cluster, run: "minikube start -p download-only-150411"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
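The "Last Start" log above also records how minikube fetches kubectl: the download URL carries a checksum parameter pointing at the published kubectl.sha256 file. The same verification can be reproduced by hand with the standard kubectl install steps; these are generic commands, not taken from this run:

	# Download the binary and its published checksum, then verify them.
	curl -LO "https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check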
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.35s)

TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-150411
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.25s)

TestDownloadOnly/v1.34.2/json-events (5.13s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-650407 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-650407 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.127689005s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.13s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1208 00:12:04.461701  846711 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1208 00:12:04.461741  846711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-650407
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-650407: exit status 85 (87.764252ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-150411 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-150411 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │ 08 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-150411                                                                                                                                                               │ download-only-150411 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │ 08 Dec 25 00:11 UTC │
	│ start   │ -o=json --download-only -p download-only-650407 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-650407 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:11:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:11:59.380626  846911 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:11:59.380759  846911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:11:59.380769  846911 out.go:374] Setting ErrFile to fd 2...
	I1208 00:11:59.380774  846911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:11:59.381032  846911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:11:59.381460  846911 out.go:368] Setting JSON to true
	I1208 00:11:59.382306  846911 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17672,"bootTime":1765135047,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:11:59.382371  846911 start.go:143] virtualization:  
	I1208 00:11:59.410639  846911 out.go:99] [download-only-650407] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:11:59.411042  846911 notify.go:221] Checking for updates...
	I1208 00:11:59.439938  846911 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:11:59.468114  846911 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:11:59.500373  846911 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:11:59.531886  846911 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:11:59.562933  846911 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:11:59.610708  846911 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:11:59.611180  846911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:11:59.640702  846911 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:11:59.640841  846911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:11:59.699160  846911 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:11:59.688341341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:11:59.699267  846911 docker.go:319] overlay module found
	I1208 00:11:59.722595  846911 out.go:99] Using the docker driver based on user configuration
	I1208 00:11:59.722643  846911 start.go:309] selected driver: docker
	I1208 00:11:59.722650  846911 start.go:927] validating driver "docker" against <nil>
	I1208 00:11:59.722774  846911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:11:59.778945  846911 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:11:59.769452507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:11:59.779113  846911 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:11:59.779424  846911 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:11:59.779582  846911 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:11:59.803393  846911 out.go:171] Using Docker driver with root privileges
	I1208 00:11:59.820119  846911 cni.go:84] Creating CNI manager for ""
	I1208 00:11:59.820217  846911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1208 00:11:59.820235  846911 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1208 00:11:59.820399  846911 start.go:353] cluster config:
	{Name:download-only-650407 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-650407 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:11:59.851127  846911 out.go:99] Starting "download-only-650407" primary control-plane node in "download-only-650407" cluster
	I1208 00:11:59.851178  846911 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1208 00:11:59.883459  846911 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1208 00:11:59.883529  846911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 00:11:59.883671  846911 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1208 00:11:59.903878  846911 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1208 00:11:59.903902  846911 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1208 00:11:59.904013  846911 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1208 00:11:59.904034  846911 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1208 00:11:59.904039  846911 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1208 00:11:59.904052  846911 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1208 00:11:59.943254  846911 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1208 00:11:59.943289  846911 cache.go:65] Caching tarball of preloaded images
	I1208 00:11:59.943462  846911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 00:11:59.948030  846911 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1208 00:11:59.948071  846911 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1208 00:12:00.032711  846911 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1208 00:12:00.032830  846911 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1208 00:12:03.651486  846911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1208 00:12:03.651890  846911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/download-only-650407/config.json ...
	I1208 00:12:03.651925  846911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/download-only-650407/config.json: {Name:mk0edd7f025d10a5e7cf04e4e17460f2d36a7b68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 00:12:03.652154  846911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1208 00:12:03.652323  846911 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-650407 host does not exist
	  To start a cluster, run: "minikube start -p download-only-650407"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
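
The preload fetch logged above is checksum-verified against an MD5 handed back by the GCS API; a minimal sketch of replaying that download by hand (URL and checksum copied from the log above, everything else standard curl/md5sum):

    # fetch the v1.34.2 containerd/arm64 preload tarball (URL from download.go:108 above)
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
    # verify against the checksum reported by preload.go:295 above
    echo "cd1a05d5493c9270e248bf47fb3f071d  preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4" | md5sum -c -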

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-650407
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.75s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-758055 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-758055 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.753132093s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.75s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1208 00:12:09.659237  846711 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1208 00:12:09.659272  846711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
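
preload-exists passes by finding the tarball the earlier download-only run cached; the equivalent manual check, using the exact path preload.go:203 reports above:

    ls -lh /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4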

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.12s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-758055
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-758055: exit status 85 (118.758336ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-150411 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-150411 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │ 08 Dec 25 00:11 UTC │
	│ delete  │ -p download-only-150411                                                                                                                                                                      │ download-only-150411 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │ 08 Dec 25 00:11 UTC │
	│ start   │ -o=json --download-only -p download-only-650407 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-650407 │ jenkins │ v1.37.0 │ 08 Dec 25 00:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ delete  │ -p download-only-650407                                                                                                                                                                      │ download-only-650407 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │ 08 Dec 25 00:12 UTC │
	│ start   │ -o=json --download-only -p download-only-758055 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-758055 │ jenkins │ v1.37.0 │ 08 Dec 25 00:12 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 00:12:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 00:12:04.947269  847109 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:12:04.947415  847109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:04.947427  847109 out.go:374] Setting ErrFile to fd 2...
	I1208 00:12:04.947433  847109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:12:04.947812  847109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:12:04.948764  847109 out.go:368] Setting JSON to true
	I1208 00:12:04.949594  847109 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":17678,"bootTime":1765135047,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:12:04.949674  847109 start.go:143] virtualization:  
	I1208 00:12:04.953133  847109 out.go:99] [download-only-758055] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:12:04.953415  847109 notify.go:221] Checking for updates...
	I1208 00:12:04.956206  847109 out.go:171] MINIKUBE_LOCATION=22054
	I1208 00:12:04.959300  847109 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:12:04.962278  847109 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:12:04.965198  847109 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:12:04.968131  847109 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1208 00:12:04.973839  847109 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 00:12:04.974225  847109 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:12:05.016794  847109 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:12:05.016928  847109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:05.072615  847109 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:05.063071802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:05.072732  847109 docker.go:319] overlay module found
	I1208 00:12:05.075798  847109 out.go:99] Using the docker driver based on user configuration
	I1208 00:12:05.075858  847109 start.go:309] selected driver: docker
	I1208 00:12:05.075870  847109 start.go:927] validating driver "docker" against <nil>
	I1208 00:12:05.075995  847109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:12:05.133946  847109 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-08 00:12:05.124341636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:12:05.134118  847109 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 00:12:05.134402  847109 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1208 00:12:05.134701  847109 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 00:12:05.137924  847109 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-758055 host does not exist
	  To start a cluster, run: "minikube start -p download-only-758055"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.12s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-758055
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1208 00:12:11.029829  846711 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-383556 --alsologtostderr --binary-mirror http://127.0.0.1:42055 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-383556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-383556
--- PASS: TestBinaryMirror (0.62s)
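
TestBinaryMirror points minikube's binary downloads at a local HTTP endpoint via --binary-mirror instead of dl.k8s.io. A rough manual sketch of the same idea, assuming ./mirror is already populated with the release-path layout minikube requests (the profile name binary-mirror-demo is made up for illustration):

    # serve a pre-populated mirror directory on the port minikube will be told about
    python3 -m http.server 42055 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:42055 --driver=docker --container-runtime=containerd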

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-011456
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-011456: exit status 85 (60.231212ms)
-- stdout --
	* Profile "addons-011456" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-011456"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-011456
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-011456: exit status 85 (80.112439ms)
-- stdout --
	* Profile "addons-011456" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-011456"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (138.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-011456 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-011456 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.364759029s)
--- PASS: TestAddons/Setup (138.37s)
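
Setup enables the whole addon matrix with repeated --addons flags at start time; the same addons can also be toggled one at a time on the running profile, e.g. (profile name from this run, metrics-server chosen arbitrarily):

    out/minikube-linux-arm64 addons enable metrics-server -p addons-011456
    out/minikube-linux-arm64 addons list -p addons-011456    # confirm what is enabled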

TestAddons/serial/Volcano (42.17s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 56.399073ms
addons_test.go:876: volcano-admission stabilized in 56.524875ms
addons_test.go:868: volcano-scheduler stabilized in 56.566861ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-kj225" [9fb7ca60-7f62-44e0-a197-e91b9c62322c] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.006526575s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-qwmp2" [142f8724-1bfe-4a71-8b16-ed15c45c918b] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004485489s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-2pv2g" [31a90250-5dd4-4ce1-a6d4-f3352a7655fb] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004085126s
addons_test.go:903: (dbg) Run:  kubectl --context addons-011456 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-011456 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-011456 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [0c8f0d6b-2b74-4293-aad7-a8998107f666] Pending
helpers_test.go:352: "test-job-nginx-0" [0c8f0d6b-2b74-4293-aad7-a8998107f666] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [0c8f0d6b-2b74-4293-aad7-a8998107f666] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004079129s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable volcano --alsologtostderr -v=1: (12.503977056s)
--- PASS: TestAddons/serial/Volcano (42.17s)
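
The job this test submits comes from testdata/vcjob.yaml; a minimal stand-in with the same shape is sketched below (batch.volcano.sh/v1alpha1 Job and its fields are the upstream Volcano API, but this manifest is an illustrative guess, not the test's actual fixture). Note the task name: Volcano names pods <job>-<task>-<index>, which is how test-job-nginx-0 shows up in the waiter above.

    kubectl --context addons-011456 create namespace my-volcano
    kubectl --context addons-011456 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano
      minAvailable: 1
      tasks:
        - name: nginx        # pods become test-job-nginx-0, test-job-nginx-1, ...
          replicas: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx
    EOF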

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-011456 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-011456 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (9.87s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-011456 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-011456 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee450f2d-8a91-4c9b-8c9d-1474a3158b2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee450f2d-8a91-4c9b-8c9d-1474a3158b2c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003373797s
addons_test.go:694: (dbg) Run:  kubectl --context addons-011456 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-011456 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-011456 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-011456 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.87s)

TestAddons/parallel/Registry (17.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.95997ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wckxj" [103201bf-c678-4987-b151-ed400f0b4529] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009938172s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-448s2" [e6890da6-3147-446a-86e3-5756d045851f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003549993s
addons_test.go:392: (dbg) Run:  kubectl --context addons-011456 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-011456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-011456 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.137751698s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 ip
2025/12/08 00:15:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.40s)
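
The wget probe above runs in-cluster against registry.kube-system.svc.cluster.local, and the DEBUG line shows the host-side hit on 192.168.49.2:5000. A host-side equivalent against the registry's standard v2 catalog endpoint (the /v2/_catalog path is the stock Distribution API, not something the test itself calls):

    curl -s http://$(out/minikube-linux-arm64 -p addons-011456 ip):5000/v2/_catalog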

TestAddons/parallel/RegistryCreds (0.84s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.516273ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-011456
addons_test.go:332: (dbg) Run:  kubectl --context addons-011456 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.84s)

TestAddons/parallel/Ingress (19.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-011456 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-011456 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-011456 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2d7c2989-b007-4c7c-a1f1-efbdc2198d18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2d7c2989-b007-4c7c-a1f1-efbdc2198d18] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004683054s
I1208 00:17:04.431750  846711 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-011456 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable ingress-dns --alsologtostderr -v=1: (1.690401565s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable ingress --alsologtostderr -v=1: (7.932726912s)
--- PASS: TestAddons/parallel/Ingress (19.43s)
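
The curl above runs inside the node over `minikube ssh`, which is why it can target 127.0.0.1; from the host, the same request goes to the node IP with the Host header spoofed to match the ingress rule:

    curl -s -H 'Host: nginx.example.com' http://$(out/minikube-linux-arm64 -p addons-011456 ip)/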

TestAddons/parallel/InspektorGadget (12.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-4qp28" [84552813-af10-4bae-ad5d-b3ae28bcd0ae] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003536698s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable inspektor-gadget --alsologtostderr -v=1: (6.234730152s)
--- PASS: TestAddons/parallel/InspektorGadget (12.24s)

TestAddons/parallel/MetricsServer (5.89s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.634961ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-f8xxr" [857caa2c-acc8-4883-bd1a-98291bb1c94d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003752359s
addons_test.go:463: (dbg) Run:  kubectl --context addons-011456 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/CSI (42.83s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1208 00:15:43.503332  846711 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1208 00:15:43.507283  846711 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1208 00:15:43.507314  846711 kapi.go:107] duration metric: took 6.686564ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.69733ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-011456 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-011456 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [20c855dd-5c80-4b9b-bba6-e7baf7b8d0a8] Pending
helpers_test.go:352: "task-pv-pod" [20c855dd-5c80-4b9b-bba6-e7baf7b8d0a8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [20c855dd-5c80-4b9b-bba6-e7baf7b8d0a8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003770975s
addons_test.go:572: (dbg) Run:  kubectl --context addons-011456 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-011456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-011456 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-011456 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-011456 delete pod task-pv-pod: (1.440187322s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-011456 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-011456 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-011456 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [31e60a7e-db59-46d5-9fd3-de6fac579810] Pending
helpers_test.go:352: "task-pv-pod-restore" [31e60a7e-db59-46d5-9fd3-de6fac579810] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [31e60a7e-db59-46d5-9fd3-de6fac579810] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00386328s
addons_test.go:614: (dbg) Run:  kubectl --context addons-011456 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-011456 delete pod task-pv-pod-restore: (1.327929794s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-011456 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-011456 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.060194145s)
--- PASS: TestAddons/parallel/CSI (42.83s)
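
The CSI sequence above is PVC -> pod -> snapshot -> PVC-from-snapshot -> pod. A stand-in for the first manifest, comparable to testdata/csi-hostpath-driver/pvc.yaml (storageClassName csi-hostpath-sc is assumed to be the class the addon installs, and the size is arbitrary; this is a sketch, not the test's fixture):

    kubectl --context addons-011456 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: [ReadWriteOnce]
      storageClassName: csi-hostpath-sc
      resources:
        requests:
          storage: 1Gi
    EOF
    # same phase probe the helpers above keep running
    kubectl --context addons-011456 get pvc hpvc -o jsonpath={.status.phase}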

TestAddons/parallel/Headlamp (16.97s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-011456 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-011456 --alsologtostderr -v=1: (1.162313225s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-zft7g" [8f544563-9433-4c37-a41f-95d318e2a64c] Pending
helpers_test.go:352: "headlamp-dfcdc64b-zft7g" [8f544563-9433-4c37-a41f-95d318e2a64c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-zft7g" [8f544563-9433-4c37-a41f-95d318e2a64c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003232047s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable headlamp --alsologtostderr -v=1: (5.805689456s)
--- PASS: TestAddons/parallel/Headlamp (16.97s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-rq2s5" [f95b2f05-acf1-40a2-b052-87ba2e046e36] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003525828s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (52.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-011456 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-011456 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2a5ff650-5d61-4629-b0fe-eee95c3d474e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2a5ff650-5d61-4629-b0fe-eee95c3d474e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2a5ff650-5d61-4629-b0fe-eee95c3d474e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004994688s
addons_test.go:967: (dbg) Run:  kubectl --context addons-011456 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 ssh "cat /opt/local-path-provisioner/pvc-e7035713-8eb1-440c-9e07-f52e9f3241f5_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-011456 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-011456 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.12116678s)
--- PASS: TestAddons/parallel/LocalPath (52.78s)
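
The long run of Pending-phase polls above is expected with the Rancher local-path provisioner: its stock storage class binds WaitForFirstConsumer, so the PVC stays Pending until the pod is scheduled. A stand-in for the PVC half of the fixture pair (storageClassName local-path is the provisioner's default class, assumed here; the size is arbitrary):

    kubectl --context addons-011456 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: [ReadWriteOnce]
      storageClassName: local-path
      resources:
        requests:
          storage: 128Mi
    EOF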

TestAddons/parallel/NvidiaDevicePlugin (6.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ncmsv" [130dcd82-a235-4c8d-aae9-460fc50965cb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003117007s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

TestAddons/parallel/Yakd (11.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-mtz46" [f6bddbe4-9e68-4d0b-bd65-6cfaed4af9e1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004335448s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-011456 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-011456 addons disable yakd --alsologtostderr -v=1: (5.930070785s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-011456
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-011456: (12.097609631s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-011456
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-011456
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-011456
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestCertOptions (33.88s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-296212 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-296212 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (30.915595884s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-296212 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-296212 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-296212 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-296212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-296212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-296212: (2.148105632s)
--- PASS: TestCertOptions (33.88s)
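For reference, the SAN check this test performs can be redone by hand. A minimal sketch, assuming a throwaway profile (cert-check is a hypothetical name; the flags are the ones used above):

    out/minikube-linux-arm64 start -p cert-check --memory=3072 \
      --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    # dump the apiserver cert and confirm the extra IP/name appear as SANs
    out/minikube-linux-arm64 -p cert-check ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    out/minikube-linux-arm64 delete -p cert-check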
TestCertExpiration (223.17s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-517238 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-517238 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (32.403672429s)
E1208 01:34:25.030608  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:34:30.128281  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-517238 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-517238 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.190782506s)
helpers_test.go:175: Cleaning up "cert-expiration-517238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-517238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-517238: (2.572724902s)
--- PASS: TestCertExpiration (223.17s)
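The 223s runtime is mostly the test waiting out the 3-minute expiry before the second start. A rough sketch of the same flow (cert-exp is a hypothetical profile name):

    out/minikube-linux-arm64 start -p cert-exp --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certs reach expiry
    # restarting with a longer expiration should regenerate the certificates
    out/minikube-linux-arm64 start -p cert-exp --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p cert-exp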
TestForceSystemdFlag (37.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-738490 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-738490 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.879074588s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-738490 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-738490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-738490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-738490: (2.120752842s)
--- PASS: TestForceSystemdFlag (37.33s)
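What the config.toml read above is checking is containerd's cgroup driver. A minimal sketch of the same verification (systemd-check is a hypothetical profile name; SystemdCgroup is the runc option containerd uses for this):

    out/minikube-linux-arm64 start -p systemd-check --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p systemd-check ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup   # expect: SystemdCgroup = true
    out/minikube-linux-arm64 delete -p systemd-check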
TestForceSystemdEnv (37.14s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-629029 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1208 01:32:51.299827  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-629029 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.722605447s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-629029 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-629029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-629029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-629029: (2.09809523s)
--- PASS: TestForceSystemdEnv (37.14s)
TestDockerEnvContainerd (45.22s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-915904 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-915904 --driver=docker  --container-runtime=containerd: (28.335836603s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-915904"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-915904": (1.219250377s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgnBs9A15oDi/agent.866223" SSH_AGENT_PID="866224" DOCKER_HOST=ssh://docker@127.0.0.1:33543 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgnBs9A15oDi/agent.866223" SSH_AGENT_PID="866224" DOCKER_HOST=ssh://docker@127.0.0.1:33543 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgnBs9A15oDi/agent.866223" SSH_AGENT_PID="866224" DOCKER_HOST=ssh://docker@127.0.0.1:33543 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.731813869s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgnBs9A15oDi/agent.866223" SSH_AGENT_PID="866224" DOCKER_HOST=ssh://docker@127.0.0.1:33543 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-bgnBs9A15oDi/agent.866223" SSH_AGENT_PID="866224" DOCKER_HOST=ssh://docker@127.0.0.1:33543 docker image ls": (1.097816759s)
helpers_test.go:175: Cleaning up "dockerenv-915904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-915904
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-915904: (2.457532911s)
--- PASS: TestDockerEnvContainerd (45.22s)
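The flow above exercises docker-env in --ssh-host mode, where the host docker CLI is pointed at the cluster's Docker endpoint over SSH. A condensed sketch using this run's profile, assuming the usual eval-the-printed-exports pattern:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-915904)"
    # DOCKER_HOST now points at ssh://docker@127.0.0.1:<port>, so builds land in the cluster
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls | grep minikube-dockerenv-containerd-test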
TestErrorSpam/setup (30.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-276861 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-276861 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-276861 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-276861 --driver=docker  --container-runtime=containerd: (30.727511352s)
--- PASS: TestErrorSpam/setup (30.73s)
TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)
TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 status
--- PASS: TestErrorSpam/status (1.09s)
TestErrorSpam/pause (1.75s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 pause
--- PASS: TestErrorSpam/pause (1.75s)
TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 unpause
--- PASS: TestErrorSpam/unpause (1.95s)
TestErrorSpam/stop (1.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 stop: (1.456014977s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-276861 --log_dir /tmp/nospam-276861 stop
--- PASS: TestErrorSpam/stop (1.66s)
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (80.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1208 00:19:30.135133  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.141948  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.153306  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.174658  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.216025  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.297384  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.458828  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:30.780288  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:31.421648  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:32.702895  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:35.265841  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:40.388116  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:19:50.629658  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:20:11.111080  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-932121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.558969728s)
--- PASS: TestFunctional/serial/StartWithProxy (80.56s)
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (7.37s)

=== RUN   TestFunctional/serial/SoftStart
I1208 00:20:21.756176  846711 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-932121 --alsologtostderr -v=8: (7.36810833s)
functional_test.go:678: soft start took 7.37409907s for "functional-932121" cluster.
I1208 00:20:29.124615  846711 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (7.37s)
TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-932121 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:3.1: (1.366930687s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:3.3: (1.125552338s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:latest: (1.047399198s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-932121 /tmp/TestFunctionalserialCacheCmdcacheadd_local2253133255/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache add minikube-local-cache-test:functional-932121
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache delete minikube-local-cache-test:functional-932121
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-932121
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (314.857463ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 cache reload: (1.070290923s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
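The reload sequence above, in plain commands: remove the image inside the node, confirm crictl no longer finds it, then let cache reload re-push everything held in the local cache. All of these appear verbatim in the log:

    out/minikube-linux-arm64 -p functional-932121 cache add registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: no such image
    out/minikube-linux-arm64 -p functional-932121 cache reload
    out/minikube-linux-arm64 -p functional-932121 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again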
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 kubectl -- --context functional-932121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-932121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
TestFunctional/serial/ExtraConfig (41.85s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 00:20:52.073032  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-932121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.844962877s)
functional_test.go:776: restart took 41.845050123s for "functional-932121" cluster.
I1208 00:21:18.804847  846711 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (41.85s)
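--extra-config takes component.key=value form and is persisted in the profile (the same option later shows up as ExtraOptions in the DryRun config dump below). The restart above, as a one-liner:

    out/minikube-linux-arm64 start -p functional-932121 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all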
TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-932121 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
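The health check behind this test is plain kubectl: list the control-plane pods and read their phase and Ready condition. A minimal sketch of the same query, assuming jq is available (jq is not part of the test itself):

    kubectl --context functional-932121 get po -l tier=control-plane -n kube-system -o=json \
      | jq -r '.items[] | "\(.metadata.labels.component): \(.status.phase)"'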
TestFunctional/serial/LogsCmd (1.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 logs: (1.553062074s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)
TestFunctional/serial/LogsFileCmd (1.53s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 logs --file /tmp/TestFunctionalserialLogsFileCmd468309103/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 logs --file /tmp/TestFunctionalserialLogsFileCmd468309103/001/logs.txt: (1.527057283s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)
TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-932121 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-932121
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-932121: exit status 115 (414.557042ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31449 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-932121 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
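Exit status 115 (SVC_UNREACHABLE) is what minikube service returns when the Service object exists but selects no running pod. The repro is exactly the three commands above:

    kubectl --context functional-932121 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-932121 || echo "exit $?"   # 115
    kubectl --context functional-932121 delete -f testdata/invalidsvc.yaml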
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 config get cpus: exit status 14 (78.434038ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 config get cpus: exit status 14 (86.546611ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
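minikube config get exits 14 when the key is unset, which is what each unset/get pair above asserts. The cycle in isolation:

    out/minikube-linux-arm64 -p functional-932121 config set cpus 2
    out/minikube-linux-arm64 -p functional-932121 config get cpus                    # prints 2
    out/minikube-linux-arm64 -p functional-932121 config unset cpus
    out/minikube-linux-arm64 -p functional-932121 config get cpus || echo "exit $?"  # 14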
TestFunctional/parallel/DashboardCmd (9.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-932121 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-932121 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 881529: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.11s)
TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-932121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (291.77041ms)

-- stdout --
	* [functional-932121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1208 00:21:57.127385  881200 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:21:57.127562  881200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:21:57.127568  881200 out.go:374] Setting ErrFile to fd 2...
	I1208 00:21:57.127574  881200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:21:57.127834  881200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:21:57.128218  881200 out.go:368] Setting JSON to false
	I1208 00:21:57.129293  881200 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18270,"bootTime":1765135047,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:21:57.129387  881200 start.go:143] virtualization:  
	I1208 00:21:57.133577  881200 out.go:179] * [functional-932121] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:21:57.136505  881200 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:21:57.136655  881200 notify.go:221] Checking for updates...
	I1208 00:21:57.142245  881200 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:21:57.145522  881200 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:21:57.148410  881200 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:21:57.151241  881200 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:21:57.154236  881200 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:21:57.157563  881200 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 00:21:57.158217  881200 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:21:57.213168  881200 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:21:57.213287  881200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:21:57.320523  881200 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-08 00:21:57.307372781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:21:57.320635  881200 docker.go:319] overlay module found
	I1208 00:21:57.326974  881200 out.go:179] * Using the docker driver based on existing profile
	I1208 00:21:57.329714  881200 start.go:309] selected driver: docker
	I1208 00:21:57.329739  881200 start.go:927] validating driver "docker" against &{Name:functional-932121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-932121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:21:57.329843  881200 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:21:57.335442  881200 out.go:203] 
	W1208 00:21:57.338206  881200 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 00:21:57.340899  881200 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.59s)
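Exit status 23 maps to RSRC_INSUFFICIENT_REQ_MEMORY: --dry-run validates the requested flags against the existing profile without starting anything, and 250MB is below the 1800MB usable minimum reported above. In isolation:

    out/minikube-linux-arm64 start -p functional-932121 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd || echo "exit $?"   # 23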
TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-932121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-932121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (280.60593ms)

-- stdout --
	* [functional-932121] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1208 00:21:56.863641  881100 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:21:56.863861  881100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:21:56.863875  881100 out.go:374] Setting ErrFile to fd 2...
	I1208 00:21:56.863880  881100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:21:56.864844  881100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:21:56.865231  881100 out.go:368] Setting JSON to false
	I1208 00:21:56.866361  881100 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18270,"bootTime":1765135047,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:21:56.866528  881100 start.go:143] virtualization:  
	I1208 00:21:56.870184  881100 out.go:179] * [functional-932121] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1208 00:21:56.873210  881100 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:21:56.873383  881100 notify.go:221] Checking for updates...
	I1208 00:21:56.879183  881100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:21:56.882336  881100 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:21:56.885548  881100 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:21:56.888625  881100 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:21:56.891679  881100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:21:56.895093  881100 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 00:21:56.895693  881100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:21:56.929086  881100 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:21:56.929232  881100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:21:57.041473  881100 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-08 00:21:57.031371241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:21:57.041587  881100 docker.go:319] overlay module found
	I1208 00:21:57.044642  881100 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1208 00:21:57.047454  881100 start.go:309] selected driver: docker
	I1208 00:21:57.047479  881100 start.go:927] validating driver "docker" against &{Name:functional-932121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-932121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:21:57.047585  881100 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:21:57.051043  881100 out.go:203] 
	W1208 00:21:57.053944  881100 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 00:21:57.056750  881100 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
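status -f takes a Go template over the status struct (the fields exercised above are .Host, .Kubelet, .APIServer and .Kubeconfig), and -o json emits the same data machine-readably:

    out/minikube-linux-arm64 -p functional-932121 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    out/minikube-linux-arm64 -p functional-932121 status -o json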
TestFunctional/parallel/ServiceCmdConnect (7.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-932121 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-932121 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-vhxll" [dd9c7c12-952d-4866-8ba3-30d0cbfd019b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-vhxll" [dd9c7c12-952d-4866-8ba3-30d0cbfd019b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005235341s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31828
functional_test.go:1680: http://192.168.49.2:31828: success! body:
Request served by hello-node-connect-7d85dfc575-vhxll
HTTP/1.1 GET /
Host: 192.168.49.2:31828
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.68s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (24.88s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5e4c18fe-94d5-4e55-a5ee-710629dd6d02] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003771231s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-932121 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-932121 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-932121 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-932121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9adf73fb-ef5f-4abc-91ab-d0906e57021d] Pending
helpers_test.go:352: "sp-pod" [9adf73fb-ef5f-4abc-91ab-d0906e57021d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9adf73fb-ef5f-4abc-91ab-d0906e57021d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00396007s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-932121 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-932121 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-932121 delete -f testdata/storage-provisioner/pod.yaml: (1.647896463s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-932121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [aee4c70e-ecb1-46d3-9b6d-05d7c2f37cf0] Pending
helpers_test.go:352: "sp-pod" [aee4c70e-ecb1-46d3-9b6d-05d7c2f37cf0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [aee4c70e-ecb1-46d3-9b6d-05d7c2f37cf0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003543649s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-932121 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.88s)

TestFunctional/parallel/SSHCmd (0.76s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

TestFunctional/parallel/CpCmd (2.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh -n functional-932121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cp functional-932121:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3849994496/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh -n functional-932121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh -n functional-932121 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.49s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/846711/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /etc/test/nested/copy/846711/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (2.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/846711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /etc/ssl/certs/846711.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/846711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /usr/share/ca-certificates/846711.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8467112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /etc/ssl/certs/8467112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8467112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /usr/share/ca-certificates/8467112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)

TestFunctional/parallel/NodeLabels (0.25s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-932121 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh "sudo systemctl is-active docker": exit status 1 (405.664219ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh "sudo systemctl is-active crio": exit status 1 (422.357602ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

TestFunctional/parallel/License (0.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.55s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 878660: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-932121 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b04c7c4b-8650-4700-9d0d-11a9feb1557d] Pending
helpers_test.go:352: "nginx-svc" [b04c7c4b-8650-4700-9d0d-11a9feb1557d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003452135s
I1208 00:21:36.741397  846711 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-932121 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.224.0 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-932121 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-932121 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-932121 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qq6gp" [539ec2e2-57de-45b6-8fce-006ff55a0a00] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-qq6gp" [539ec2e2-57de-45b6-8fce-006ff55a0a00] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003045738s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.33s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service list -o json
functional_test.go:1504: Took "526.624073ms" to run "out/minikube-linux-arm64 -p functional-932121 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30531
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.59s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30531
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "498.545654ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "134.011606ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

TestFunctional/parallel/MountCmd/any-port (10.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdany-port3816395391/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765153314769370581" to /tmp/TestFunctionalparallelMountCmdany-port3816395391/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765153314769370581" to /tmp/TestFunctionalparallelMountCmdany-port3816395391/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765153314769370581" to /tmp/TestFunctionalparallelMountCmdany-port3816395391/001/test-1765153314769370581
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (506.557869ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1208 00:21:55.277030  846711 retry.go:31] will retry after 720.890663ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 00:21 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 00:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 00:21 test-1765153314769370581
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh cat /mount-9p/test-1765153314769370581
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-932121 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9fb7599b-fc96-461f-b146-0f419aba7d54] Pending
helpers_test.go:352: "busybox-mount" [9fb7599b-fc96-461f-b146-0f419aba7d54] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9fb7599b-fc96-461f-b146-0f419aba7d54] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9fb7599b-fc96-461f-b146-0f419aba7d54] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00371387s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-932121 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdany-port3816395391/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "552.902689ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "70.120481ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.62s)

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdspecific-port3134808352/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.439656ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1208 00:22:05.260416  846711 retry.go:31] will retry after 323.20064ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdspecific-port3134808352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "sudo umount -f /mount-9p"
2025/12/08 00:22:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh "sudo umount -f /mount-9p": exit status 1 (300.627834ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-932121 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdspecific-port3134808352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-932121 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-932121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2963778103/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-932121 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-932121
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-932121
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-932121 image ls --format short --alsologtostderr:
I1208 00:22:15.315412  884349 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:15.315600  884349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.315615  884349 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:15.315635  884349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.315928  884349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:15.316644  884349 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.316819  884349 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.317389  884349 cli_runner.go:164] Run: docker container inspect functional-932121 --format={{.State.Status}}
I1208 00:22:15.340882  884349 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:15.340946  884349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932121
I1208 00:22:15.367197  884349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-932121/id_rsa Username:docker}
I1208 00:22:15.482843  884349 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-932121 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-932121  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/minikube-local-cache-test │ functional-932121  │ sha256:ab3bd7 │ 990B   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-932121 image ls --format table --alsologtostderr:
I1208 00:22:15.626343  884431 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:15.626473  884431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.626484  884431 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:15.626490  884431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.626969  884431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:15.627614  884431 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.627740  884431 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.628268  884431 cli_runner.go:164] Run: docker container inspect functional-932121 --format={{.State.Status}}
I1208 00:22:15.659646  884431 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:15.659702  884431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932121
I1208 00:22:15.693896  884431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-932121/id_rsa Username:docker}
I1208 00:22:15.823481  884431 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-932121 image ls --format json --alsologtostderr:
[{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-932121"],"size":"990"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-932121"],"size":"2173567"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-932121 image ls --format json --alsologtostderr:
I1208 00:22:15.602888  884426 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:15.603002  884426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.603008  884426 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:15.603013  884426 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.603276  884426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:15.603905  884426 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.604017  884426 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.604550  884426 cli_runner.go:164] Run: docker container inspect functional-932121 --format={{.State.Status}}
I1208 00:22:15.632688  884426 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:15.632748  884426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932121
I1208 00:22:15.662097  884426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-932121/id_rsa Username:docker}
I1208 00:22:15.781599  884426 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-932121 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-932121
size: "2173567"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-932121
size: "990"
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-932121 image ls --format yaml --alsologtostderr:
I1208 00:22:15.317657  884350 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:15.317849  884350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.317880  884350 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:15.317901  884350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:15.318254  884350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:15.319146  884350 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.319342  884350 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:15.319994  884350 cli_runner.go:164] Run: docker container inspect functional-932121 --format={{.State.Status}}
I1208 00:22:15.340660  884350 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:15.340715  884350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932121
I1208 00:22:15.362699  884350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-932121/id_rsa Username:docker}
I1208 00:22:15.473278  884350 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-932121 ssh pgrep buildkitd: exit status 1 (324.92154ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr: (3.176138473s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr:
I1208 00:22:16.196165  884560 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:16.196934  884560 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:16.196952  884560 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:16.196959  884560 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:16.197342  884560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:16.198239  884560 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:16.201238  884560 config.go:182] Loaded profile config "functional-932121": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1208 00:22:16.201911  884560 cli_runner.go:164] Run: docker container inspect functional-932121 --format={{.State.Status}}
I1208 00:22:16.220561  884560 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:16.220616  884560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-932121
I1208 00:22:16.238674  884560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-932121/id_rsa Username:docker}
I1208 00:22:16.345866  884560 build_images.go:162] Building image from path: /tmp/build.910824933.tar
I1208 00:22:16.345951  884560 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 00:22:16.354312  884560 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.910824933.tar
I1208 00:22:16.358123  884560 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.910824933.tar: stat -c "%s %y" /var/lib/minikube/build/build.910824933.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.910824933.tar': No such file or directory
I1208 00:22:16.358206  884560 ssh_runner.go:362] scp /tmp/build.910824933.tar --> /var/lib/minikube/build/build.910824933.tar (3072 bytes)
I1208 00:22:16.378388  884560 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.910824933
I1208 00:22:16.387450  884560 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.910824933 -xf /var/lib/minikube/build/build.910824933.tar
I1208 00:22:16.396093  884560 containerd.go:394] Building image: /var/lib/minikube/build/build.910824933
I1208 00:22:16.396166  884560 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.910824933 --local dockerfile=/var/lib/minikube/build/build.910824933 --output type=image,name=localhost/my-image:functional-932121
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 DONE 0.5s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4881d75fd31f86215059e5ccfa44aed48f54737ed754d9579b1fa69e43e66a6f
#8 exporting manifest sha256:4881d75fd31f86215059e5ccfa44aed48f54737ed754d9579b1fa69e43e66a6f 0.0s done
#8 exporting config sha256:c3e631d567716bbfe0986e6c2fd4c770ac5c0839aee36b7fdd143d3faa4a5d3f 0.0s done
#8 naming to localhost/my-image:functional-932121 done
#8 DONE 0.2s
I1208 00:22:19.293033  884560 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.910824933 --local dockerfile=/var/lib/minikube/build/build.910824933 --output type=image,name=localhost/my-image:functional-932121: (2.896827199s)
I1208 00:22:19.293125  884560 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.910824933
I1208 00:22:19.301289  884560 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.910824933.tar
I1208 00:22:19.308976  884560 build_images.go:218] Built localhost/my-image:functional-932121 from /tmp/build.910824933.tar
I1208 00:22:19.309014  884560 build_images.go:134] succeeded building to: functional-932121
I1208 00:22:19.309019  884560 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
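
The trace above spells out the whole build path on a containerd node: the build context is tarred on the host, copied into the node, unpacked under /var/lib/minikube/build, and handed to BuildKit's buildctl with the dockerfile.v0 frontend. A sketch of the same steps run by hand inside the node (paths copied from this run; treat them as illustrative):

  # inside the node (out/minikube-linux-arm64 -p functional-932121 ssh),
  # assuming the context tar has already been copied over
  $ sudo mkdir -p /var/lib/minikube/build/build.910824933
  $ sudo tar -C /var/lib/minikube/build/build.910824933 -xf /var/lib/minikube/build/build.910824933.tar
  $ sudo buildctl build --frontend dockerfile.v0 \
      --local context=/var/lib/minikube/build/build.910824933 \
      --local dockerfile=/var/lib/minikube/build/build.910824933 \
      --output type=image,name=localhost/my-image:functional-932121

Judging from build steps #5-#7, the testdata/build Dockerfile amounts to a gcr.io/k8s-minikube/busybox base plus a RUN true and an ADD content.txt /.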

TestFunctional/parallel/ImageCommands/Setup (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-932121
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr: (1.094170992s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr: (1.065331525s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-932121
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image save kicbase/echo-server:functional-932121 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image rm kicbase/echo-server:functional-932121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
E1208 00:22:13.995464  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-932121
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-932121 image save --daemon kicbase/echo-server:functional-932121 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-932121
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
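
Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon above exercise a full round trip between the node's containerd store and the host. A condensed sketch of that cycle (tarball path shortened from the workspace path this job used):

  # node -> tarball on the host, then back into the node
  $ out/minikube-linux-arm64 -p functional-932121 image save kicbase/echo-server:functional-932121 ./echo-server-save.tar
  $ out/minikube-linux-arm64 -p functional-932121 image load ./echo-server-save.tar
  # node -> host docker daemon, then confirm the image arrived
  $ out/minikube-linux-arm64 -p functional-932121 image save --daemon kicbase/echo-server:functional-932121
  $ docker image inspect kicbase/echo-server:functional-932121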

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-932121
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-932121
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-932121
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:3.1: (1.185319981s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:3.3: (1.175730379s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 cache add registry.k8s.io/pause:latest: (1.08606786s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1291739444/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache add minikube-local-cache-test:functional-386544
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache delete minikube-local-cache-test:functional-386544
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.970226ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 cache reload: (1.135929897s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.06s)
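
cache_reload is the one test in this group with a real break-and-recover shape: it deletes the image from the node's runtime, proves crictl no longer sees it, then restores it from minikube's host-side cache. The same probe by hand, with the commands exactly as run above:

  # remove the image inside the node, then confirm it is gone (inspecti exits non-zero)
  $ out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # repopulate the runtime from the cache and re-check
  $ out/minikube-linux-arm64 -p functional-386544 cache reload
  $ out/minikube-linux-arm64 -p functional-386544 ssh sudo crictl inspecti registry.k8s.io/pause:latest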

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2089604168/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2089604168/001/logs.txt: (1.04753962s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 config get cpus: exit status 14 (64.787417ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 config get cpus: exit status 14 (74.096729ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
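
The ConfigCmd sequence is a set/get/unset round trip; the contract worth noting is that config get on an unset key fails with exit status 14 rather than printing an empty value. By hand:

  $ out/minikube-linux-arm64 -p functional-386544 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-386544 config get cpus    # prints 2
  $ out/minikube-linux-arm64 -p functional-386544 config unset cpus
  $ out/minikube-linux-arm64 -p functional-386544 config get cpus    # exit status 14, key not found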

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (229.919376ms)

-- stdout --
	* [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 00:51:29.042555  913914 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:51:29.042716  913914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.042747  913914 out.go:374] Setting ErrFile to fd 2...
	I1208 00:51:29.042775  913914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:29.043094  913914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:51:29.043494  913914 out.go:368] Setting JSON to false
	I1208 00:51:29.044371  913914 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":20042,"bootTime":1765135047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:51:29.044490  913914 start.go:143] virtualization:  
	I1208 00:51:29.047893  913914 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 00:51:29.051130  913914 notify.go:221] Checking for updates...
	I1208 00:51:29.051896  913914 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:51:29.055007  913914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:51:29.058345  913914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:51:29.061383  913914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:51:29.064294  913914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:51:29.067197  913914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:51:29.070622  913914 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:51:29.071280  913914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:51:29.107698  913914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:51:29.107813  913914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:29.199815  913914 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:29.184964443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:29.199927  913914 docker.go:319] overlay module found
	I1208 00:51:29.203070  913914 out.go:179] * Using the docker driver based on existing profile
	I1208 00:51:29.205909  913914 start.go:309] selected driver: docker
	I1208 00:51:29.205928  913914 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:29.206043  913914 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:51:29.209630  913914 out.go:203] 
	W1208 00:51:29.212442  913914 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 00:51:29.215197  913914 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.48s)
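
DryRun validates flags against the existing profile without touching the cluster; the assertion of interest is that an impossible --memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of both sides of the check (the second command is the passing case, inferred from the -v=1 run above):

  # fails: 250MB is below minikube's 1800MB usable minimum (exit status 23)
  $ out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  # passes: same dry run without the memory override, still mutates nothing
  $ out/minikube-linux-arm64 start -p functional-386544 --dry-run --driver=docker --container-runtime=containerd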

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (194.373303ms)

-- stdout --
	* [functional-386544] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 00:51:28.847269  913868 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:51:28.847408  913868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:28.847419  913868 out.go:374] Setting ErrFile to fd 2...
	I1208 00:51:28.847424  913868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:51:28.847801  913868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:51:28.848273  913868 out.go:368] Setting JSON to false
	I1208 00:51:28.849108  913868 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":20042,"bootTime":1765135047,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 00:51:28.849188  913868 start.go:143] virtualization:  
	I1208 00:51:28.852639  913868 out.go:179] * [functional-386544] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1208 00:51:28.855609  913868 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 00:51:28.855683  913868 notify.go:221] Checking for updates...
	I1208 00:51:28.861389  913868 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 00:51:28.864359  913868 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 00:51:28.867285  913868 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 00:51:28.870274  913868 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 00:51:28.873289  913868 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 00:51:28.876756  913868 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 00:51:28.877399  913868 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 00:51:28.902170  913868 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 00:51:28.902319  913868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:51:28.968139  913868 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 00:51:28.955735872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:51:28.968253  913868 docker.go:319] overlay module found
	I1208 00:51:28.971325  913868 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1208 00:51:28.974221  913868 start.go:309] selected driver: docker
	I1208 00:51:28.974237  913868 start.go:927] validating driver "docker" against &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 00:51:28.974339  913868 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 00:51:28.977898  913868 out.go:203] 
	W1208 00:51:28.980833  913868 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 00:51:28.983712  913868 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.19s)
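
InternationalLanguage repeats the DryRun probe but asserts the user-facing strings come out localized (French here). The log does not show how the locale was selected; minikube's translations are keyed off the standard locale environment variables, so a sketch along these lines should reproduce it (which variable minikube consults first is an assumption on my part):

  # assumes LC_ALL/LANG is what minikube's translation layer reads
  $ LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-386544 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
  # expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."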

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.75s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh -n functional-386544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cp functional-386544:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2533455381/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh -n functional-386544 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh -n functional-386544 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.27s)
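
CpCmd checks three copy directions: a host file into the node, a node file back out, and a host file to a node path that minikube has to create first. The same three moves by hand (host-side destination shortened; cp-test.txt stands in for any file):

  $ out/minikube-linux-arm64 -p functional-386544 cp testdata/cp-test.txt /home/docker/cp-test.txt
  $ out/minikube-linux-arm64 -p functional-386544 cp functional-386544:/home/docker/cp-test.txt /tmp/cp-test.txt
  $ out/minikube-linux-arm64 -p functional-386544 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  # verify inside the node
  $ out/minikube-linux-arm64 -p functional-386544 ssh -n functional-386544 "sudo cat /home/docker/cp-test.txt"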

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/846711/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/test/nested/copy/846711/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.78s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/846711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/846711.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/846711.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /usr/share/ca-certificates/846711.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8467112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/8467112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8467112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /usr/share/ca-certificates/8467112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.78s)
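
CertSync asserts that a certificate dropped into the host's .minikube files tree is synced to three node locations: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named file in /etc/ssl/certs (51391683.0 here, which looks like the usual OpenSSL subject-hash naming). Spot-checking one cert by hand, with 846711 being the name this run used:

  $ out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/846711.pem"
  $ out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /usr/share/ca-certificates/846711.pem"
  $ out/minikube-linux-arm64 -p functional-386544 ssh "sudo cat /etc/ssl/certs/51391683.0"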

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "sudo systemctl is-active docker": exit status 1 (285.028216ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "sudo systemctl is-active crio": exit status 1 (286.154217ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
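
Both non-zero exits above are the expected outcome: systemctl is-active prints the unit state and exits 3 for an inactive unit, which minikube ssh surfaces as a failing command (the "Process exited with status 3" in stderr). A hand-run equivalent of the same assertion, with the profile name taken from the log:

  # succeeds only in the failure branch, i.e. when docker is NOT the active runtime
  minikube -p functional-386544 ssh "sudo systemctl is-active docker" \
    && echo "unexpected: docker active" \
    || echo "docker inactive, as expected"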

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-386544 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "347.132821ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.207454ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "349.633579ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "71.862831ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.42s)
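
A sketch of consuming the JSON this test exercises; minikube groups profiles under top-level "valid" and "invalid" arrays, and jq usage here is illustrative, not part of the test:

  minikube profile list -o json | jq -r '.valid[].Name'
  # --light skips cluster status probing, which is why it runs in ~70ms above
  minikube profile list -o json --light | jq '.valid | length'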

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo222343014/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.597745ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1208 00:51:22.919100  846711 retry.go:31] will retry after 268.847235ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo222343014/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh "sudo umount -f /mount-9p": exit status 1 (273.007591ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-386544 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo222343014/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.64s)
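
The retry at 00:51:22 above shows the 9p mount becoming visible asynchronously after the mount daemon starts, so the first findmnt probe can race it. A minimal polling sketch of the same check (port and profile taken from the log; the host directory is illustrative):

  minikube mount -p functional-386544 /tmp/somedir:/mount-9p --port 46464 &
  for i in $(seq 5); do
    minikube -p functional-386544 ssh "findmnt -T /mount-9p | grep 9p" && break
    sleep 1
  done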

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-386544 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-386544 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3818501627/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.43s)
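
VerifyCleanup leans on mount --kill to tear down every mount daemon for the profile in one shot rather than stopping the three background processes individually; the subsequent "unable to find parent, assuming dead" lines are the stop helper finding them already gone. The manual equivalent, under the same assumptions:

  minikube mount -p functional-386544 --kill=true
  # afterwards each probe should fail, confirming cleanup
  minikube -p functional-386544 ssh "findmnt -T /mount1" || echo "mount1 gone"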

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386544 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-386544
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-386544
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386544 image ls --format short --alsologtostderr:
I1208 00:51:42.047779  916093 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:42.047901  916093 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.047913  916093 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:42.047919  916093 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.048190  916093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:42.048857  916093 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.048997  916093 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.049607  916093 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:42.072822  916093 ssh_runner.go:195] Run: systemctl --version
I1208 00:51:42.072933  916093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:42.093202  916093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:42.203997  916093 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386544 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-386544  │ sha256:ab3bd7 │ 990B   │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0     │ sha256:ccd634 │ 24.7MB │
│ docker.io/kicbase/echo-server               │ functional-386544  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1            │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0     │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0     │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ localhost/my-image                          │ functional-386544  │ sha256:d578e8 │ 831kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0     │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386544 image ls --format table --alsologtostderr:
I1208 00:51:46.415410  916489 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:46.415533  916489 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:46.415543  916489 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:46.415548  916489 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:46.415790  916489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:46.416391  916489 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:46.416517  916489 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:46.417015  916489 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:46.437809  916489 ssh_runner.go:195] Run: systemctl --version
I1208 00:51:46.437876  916489 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:46.455570  916489 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:46.561206  916489 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386544 image ls --format json --alsologtostderr:
[{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21168808"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"20661043"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15391364"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-386544"],"size":"990"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22429671"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-386544"],"size":"2173567"},{"id":"sha256:d578e88641f22895256350e9a0edd01255442515ed97b7121568849cba4887d7","repoDigests":[],"repoTags":["localhost/my-image:functional-386544"],"size":"830614"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24678359"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386544 image ls --format json --alsologtostderr:
I1208 00:51:46.173319  916448 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:46.173487  916448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:46.173501  916448 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:46.173506  916448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:46.173782  916448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:46.174422  916448 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:46.174627  916448 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:46.175202  916448 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:46.192559  916448 ssh_runner.go:195] Run: systemctl --version
I1208 00:51:46.192615  916448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:46.210717  916448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:46.317243  916448 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.24s)
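
The stdout above is a single JSON array of image objects (id, repoDigests, repoTags, size). A sketch of post-processing it, for example pairing the first tag with its size in bytes; jq is an assumption of this example, not used by the test:

  minikube -p functional-386544 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0]) \(.size)"'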

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-386544 image ls --format yaml --alsologtostderr:
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-386544
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21168808"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24678359"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20661043"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ab3bd7310ba004a6221e62971b0d92cf8ea1c77a8c7be89dbbba101e42fb246f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-386544
size: "990"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22429671"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15391364"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386544 image ls --format yaml --alsologtostderr:
I1208 00:51:42.295586  916129 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:42.295719  916129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.295737  916129 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:42.295744  916129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.296009  916129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:42.296679  916129 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.296814  916129 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.297392  916129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:42.315695  916129 ssh_runner.go:195] Run: systemctl --version
I1208 00:51:42.315760  916129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:42.333628  916129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:42.441302  916129 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-386544 ssh pgrep buildkitd: exit status 1 (287.138012ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image build -t localhost/my-image:functional-386544 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-386544 image build -t localhost/my-image:functional-386544 testdata/build --alsologtostderr: (3.128068356s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-386544 image build -t localhost/my-image:functional-386544 testdata/build --alsologtostderr:
I1208 00:51:42.814042  916235 out.go:360] Setting OutFile to fd 1 ...
I1208 00:51:42.814228  916235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.814259  916235 out.go:374] Setting ErrFile to fd 2...
I1208 00:51:42.814284  916235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:51:42.814648  916235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:51:42.815336  916235 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.816098  916235 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:51:42.816723  916235 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:51:42.837010  916235 ssh_runner.go:195] Run: systemctl --version
I1208 00:51:42.837067  916235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:51:42.859159  916235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:51:42.965064  916235 build_images.go:162] Building image from path: /tmp/build.882650382.tar
I1208 00:51:42.965139  916235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 00:51:42.973200  916235 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.882650382.tar
I1208 00:51:42.977230  916235 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.882650382.tar: stat -c "%s %y" /var/lib/minikube/build/build.882650382.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.882650382.tar': No such file or directory
I1208 00:51:42.977266  916235 ssh_runner.go:362] scp /tmp/build.882650382.tar --> /var/lib/minikube/build/build.882650382.tar (3072 bytes)
I1208 00:51:42.995309  916235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.882650382
I1208 00:51:43.004883  916235 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.882650382 -xf /var/lib/minikube/build/build.882650382.tar
I1208 00:51:43.014832  916235 containerd.go:394] Building image: /var/lib/minikube/build/build.882650382
I1208 00:51:43.014915  916235 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.882650382 --local dockerfile=/var/lib/minikube/build/build.882650382 --output type=image,name=localhost/my-image:functional-386544
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:713aa5875af0129c25281457d8b60ba85d039b80ae00b2136d7e378fdfbe77d8 0.0s done
#8 exporting config sha256:d578e88641f22895256350e9a0edd01255442515ed97b7121568849cba4887d7 0.0s done
#8 naming to localhost/my-image:functional-386544 done
#8 DONE 0.2s
I1208 00:51:45.865447  916235 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.882650382 --local dockerfile=/var/lib/minikube/build/build.882650382 --output type=image,name=localhost/my-image:functional-386544: (2.850498571s)
I1208 00:51:45.865518  916235 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.882650382
I1208 00:51:45.873515  916235 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.882650382.tar
I1208 00:51:45.881218  916235 build_images.go:218] Built localhost/my-image:functional-386544 from /tmp/build.882650382.tar
I1208 00:51:45.881254  916235 build_images.go:134] succeeded building to: functional-386544
I1208 00:51:45.881259  916235 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.64s)
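
Steps #1 through #8 of the buildctl log imply a three-instruction Dockerfile (FROM busybox, RUN true, ADD content.txt); a reconstruction under that assumption, with the real testdata/build contents possibly differing, plus the same build driven by hand:

  # hypothetical reconstruction of testdata/build, inferred from steps #5-#7
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo test-content > content.txt
  minikube -p functional-386544 image build -t localhost/my-image:functional-386544 .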

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-386544
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image load --daemon kicbase/echo-server:functional-386544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image save kicbase/echo-server:functional-386544 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image rm kicbase/echo-server:functional-386544 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-386544
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 image save --daemon kicbase/echo-server:functional-386544 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)
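
The three steps above round-trip an image between the cluster runtime and the host Docker daemon: remove the host copy, re-export it from the cluster with image save --daemon, then prove it is back with docker image inspect. The same sequence by hand, with the names from the log:

  docker rmi kicbase/echo-server:functional-386544
  minikube -p functional-386544 image save --daemon kicbase/echo-server:functional-386544
  docker image inspect kicbase/echo-server:functional-386544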

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-386544 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-386544
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (177.54s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1208 00:54:25.032366  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.038740  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.050153  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.071709  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.113115  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.194488  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.355966  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:25.677630  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:26.319015  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:27.600298  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:30.127828  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:30.162161  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:35.284420  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:54:45.526266  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:06.011599  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:55:46.974588  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m56.629740021s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (177.54s)

TestMultiControlPlane/serial/DeployApp (7.55s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- rollout status deployment/busybox
E1208 00:56:28.221440  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 kubectl -- rollout status deployment/busybox: (4.499291185s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-fvx7f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-jkqgw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-q2lwf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-fvx7f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-jkqgw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-q2lwf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-fvx7f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-jkqgw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-q2lwf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.55s)
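For reference, the deploy-and-wait sequence driven above can be replayed by hand; a minimal sketch using only commands from this run's log (pod names such as busybox-7b57f96db7-* are per-run and must be listed first):

    # Sketch only: replays the apply/rollout sequence above by hand.
    # Profile name and manifest path are taken from this run's log.
    out/minikube-linux-arm64 -p ha-022174 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 -p ha-022174 kubectl -- rollout status deployment/busybox
    # Pod names are generated per deployment; list them before exec'ing:
    out/minikube-linux-arm64 -p ha-022174 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'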

TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-fvx7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-fvx7f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-jkqgw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-jkqgw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-q2lwf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 kubectl -- exec busybox-7b57f96db7-q2lwf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
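The pipeline inside each exec call above extracts the host IP that busybox's nslookup prints on its fifth output line (third space-separated field), then pings it. A hand-run sketch, with the pod name taken from this run and therefore hypothetical for any other run:

    # Sketch: resolve host.minikube.internal inside a pod, then ping the result.
    # busybox nslookup prints the answer on line 5; field 3 is the address.
    POD=busybox-7b57f96db7-fvx7f   # per-run name, copied from the log above
    HOST_IP=$(out/minikube-linux-arm64 -p ha-022174 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 -p ha-022174 kubectl -- exec "$POD" -- ping -c 1 "$HOST_IP"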

TestMultiControlPlane/serial/AddWorkerNode (60.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node add --alsologtostderr -v 5
E1208 00:57:08.895967  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 node add --alsologtostderr -v 5: (59.027624755s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5: (1.156884685s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.18s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-022174 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.11933546s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.12s)

TestMultiControlPlane/serial/CopyFile (21.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 status --output json --alsologtostderr -v 5: (1.11642627s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp testdata/cp-test.txt ha-022174:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4107019107/001/cp-test_ha-022174.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174:/home/docker/cp-test.txt ha-022174-m02:/home/docker/cp-test_ha-022174_ha-022174-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test_ha-022174_ha-022174-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174:/home/docker/cp-test.txt ha-022174-m03:/home/docker/cp-test_ha-022174_ha-022174-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test_ha-022174_ha-022174-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174:/home/docker/cp-test.txt ha-022174-m04:/home/docker/cp-test_ha-022174_ha-022174-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test_ha-022174_ha-022174-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp testdata/cp-test.txt ha-022174-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4107019107/001/cp-test_ha-022174-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m02:/home/docker/cp-test.txt ha-022174:/home/docker/cp-test_ha-022174-m02_ha-022174.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test_ha-022174-m02_ha-022174.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m02:/home/docker/cp-test.txt ha-022174-m03:/home/docker/cp-test_ha-022174-m02_ha-022174-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test_ha-022174-m02_ha-022174-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m02:/home/docker/cp-test.txt ha-022174-m04:/home/docker/cp-test_ha-022174-m02_ha-022174-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test_ha-022174-m02_ha-022174-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp testdata/cp-test.txt ha-022174-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4107019107/001/cp-test_ha-022174-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m03:/home/docker/cp-test.txt ha-022174:/home/docker/cp-test_ha-022174-m03_ha-022174.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test_ha-022174-m03_ha-022174.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m03:/home/docker/cp-test.txt ha-022174-m02:/home/docker/cp-test_ha-022174-m03_ha-022174-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test_ha-022174-m03_ha-022174-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m03:/home/docker/cp-test.txt ha-022174-m04:/home/docker/cp-test_ha-022174-m03_ha-022174-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test_ha-022174-m03_ha-022174-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp testdata/cp-test.txt ha-022174-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4107019107/001/cp-test_ha-022174-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m04:/home/docker/cp-test.txt ha-022174:/home/docker/cp-test_ha-022174-m04_ha-022174.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test_ha-022174-m04_ha-022174.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m04:/home/docker/cp-test.txt ha-022174-m02:/home/docker/cp-test_ha-022174-m04_ha-022174-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 "sudo cat /home/docker/cp-test_ha-022174-m04_ha-022174-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 cp ha-022174-m04:/home/docker/cp-test.txt ha-022174-m03:/home/docker/cp-test_ha-022174-m04_ha-022174-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m03 "sudo cat /home/docker/cp-test_ha-022174-m04_ha-022174-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.09s)
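The long sequence above is one pattern repeated for every node pair; reduced to a single round trip (paths and profile copied from the log, with m02 standing in for any node):

    # Sketch of one copy round trip: local -> node, verify, node -> node, verify.
    out/minikube-linux-arm64 -p ha-022174 cp testdata/cp-test.txt ha-022174:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-022174 cp ha-022174:/home/docker/cp-test.txt \
      ha-022174-m02:/home/docker/cp-test_ha-022174_ha-022174-m02.txt
    out/minikube-linux-arm64 -p ha-022174 ssh -n ha-022174-m02 \
      "sudo cat /home/docker/cp-test_ha-022174_ha-022174-m02.txt"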

TestMultiControlPlane/serial/StopSecondaryNode (13.33s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 node stop m02 --alsologtostderr -v 5: (12.475243975s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5: exit status 7 (852.7621ms)
-- stdout --
	ha-022174
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022174-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022174-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-022174-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1208 00:58:11.156049  933983 out.go:360] Setting OutFile to fd 1 ...
	I1208 00:58:11.156196  933983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:58:11.156202  933983 out.go:374] Setting ErrFile to fd 2...
	I1208 00:58:11.156207  933983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 00:58:11.156588  933983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 00:58:11.156817  933983 out.go:368] Setting JSON to false
	I1208 00:58:11.156839  933983 mustload.go:66] Loading cluster: ha-022174
	I1208 00:58:11.157643  933983 config.go:182] Loaded profile config "ha-022174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 00:58:11.157670  933983 status.go:174] checking status of ha-022174 ...
	I1208 00:58:11.159025  933983 cli_runner.go:164] Run: docker container inspect ha-022174 --format={{.State.Status}}
	I1208 00:58:11.161126  933983 notify.go:221] Checking for updates...
	I1208 00:58:11.185644  933983 status.go:371] ha-022174 host status = "Running" (err=<nil>)
	I1208 00:58:11.185671  933983 host.go:66] Checking if "ha-022174" exists ...
	I1208 00:58:11.186039  933983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-022174
	I1208 00:58:11.212105  933983 host.go:66] Checking if "ha-022174" exists ...
	I1208 00:58:11.212454  933983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:58:11.212503  933983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-022174
	I1208 00:58:11.231528  933983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33563 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/ha-022174/id_rsa Username:docker}
	I1208 00:58:11.341719  933983 ssh_runner.go:195] Run: systemctl --version
	I1208 00:58:11.348969  933983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:58:11.372956  933983 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 00:58:11.463881  933983 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-08 00:58:11.452307329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 00:58:11.464519  933983 kubeconfig.go:125] found "ha-022174" server: "https://192.168.49.254:8443"
	I1208 00:58:11.464558  933983 api_server.go:166] Checking apiserver status ...
	I1208 00:58:11.464620  933983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:58:11.478121  933983 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	I1208 00:58:11.487006  933983 api_server.go:182] apiserver freezer: "7:freezer:/docker/baf75734d6cc69dc80fb0fecef8baace8cd380c1aa9e07f0044cab37e948bfcd/kubepods/burstable/podc526e4ac30356e3384ec864ab144b4aa/91bfff98715f723b102fa3d55745902c08fc5e0f469ffabe84550028931c09be"
	I1208 00:58:11.487076  933983 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/baf75734d6cc69dc80fb0fecef8baace8cd380c1aa9e07f0044cab37e948bfcd/kubepods/burstable/podc526e4ac30356e3384ec864ab144b4aa/91bfff98715f723b102fa3d55745902c08fc5e0f469ffabe84550028931c09be/freezer.state
	I1208 00:58:11.494895  933983 api_server.go:204] freezer state: "THAWED"
	I1208 00:58:11.494927  933983 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1208 00:58:11.505064  933983 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1208 00:58:11.505098  933983 status.go:463] ha-022174 apiserver status = Running (err=<nil>)
	I1208 00:58:11.505109  933983 status.go:176] ha-022174 status: &{Name:ha-022174 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:58:11.505128  933983 status.go:174] checking status of ha-022174-m02 ...
	I1208 00:58:11.505468  933983 cli_runner.go:164] Run: docker container inspect ha-022174-m02 --format={{.State.Status}}
	I1208 00:58:11.525837  933983 status.go:371] ha-022174-m02 host status = "Stopped" (err=<nil>)
	I1208 00:58:11.525860  933983 status.go:384] host is not running, skipping remaining checks
	I1208 00:58:11.525868  933983 status.go:176] ha-022174-m02 status: &{Name:ha-022174-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:58:11.525889  933983 status.go:174] checking status of ha-022174-m03 ...
	I1208 00:58:11.526237  933983 cli_runner.go:164] Run: docker container inspect ha-022174-m03 --format={{.State.Status}}
	I1208 00:58:11.545812  933983 status.go:371] ha-022174-m03 host status = "Running" (err=<nil>)
	I1208 00:58:11.545837  933983 host.go:66] Checking if "ha-022174-m03" exists ...
	I1208 00:58:11.546169  933983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-022174-m03
	I1208 00:58:11.567896  933983 host.go:66] Checking if "ha-022174-m03" exists ...
	I1208 00:58:11.568218  933983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:58:11.568268  933983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-022174-m03
	I1208 00:58:11.586990  933983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33573 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/ha-022174-m03/id_rsa Username:docker}
	I1208 00:58:11.696034  933983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:58:11.712193  933983 kubeconfig.go:125] found "ha-022174" server: "https://192.168.49.254:8443"
	I1208 00:58:11.712224  933983 api_server.go:166] Checking apiserver status ...
	I1208 00:58:11.712269  933983 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 00:58:11.725291  933983 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup
	I1208 00:58:11.734335  933983 api_server.go:182] apiserver freezer: "7:freezer:/docker/ba60dd2e7245008803434bc00e646a9c206f19ee0506aa8b809b460491ae2fc2/kubepods/burstable/pod7814a7f6c604d60e29bcbd05dfe616e6/4ed4c6d22cf7509fa77e8e375fbd3fac570584779adb5e3391f1e04fc5779654"
	I1208 00:58:11.734417  933983 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ba60dd2e7245008803434bc00e646a9c206f19ee0506aa8b809b460491ae2fc2/kubepods/burstable/pod7814a7f6c604d60e29bcbd05dfe616e6/4ed4c6d22cf7509fa77e8e375fbd3fac570584779adb5e3391f1e04fc5779654/freezer.state
	I1208 00:58:11.750995  933983 api_server.go:204] freezer state: "THAWED"
	I1208 00:58:11.751025  933983 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1208 00:58:11.759248  933983 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1208 00:58:11.759285  933983 status.go:463] ha-022174-m03 apiserver status = Running (err=<nil>)
	I1208 00:58:11.759305  933983 status.go:176] ha-022174-m03 status: &{Name:ha-022174-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 00:58:11.759327  933983 status.go:174] checking status of ha-022174-m04 ...
	I1208 00:58:11.759673  933983 cli_runner.go:164] Run: docker container inspect ha-022174-m04 --format={{.State.Status}}
	I1208 00:58:11.777802  933983 status.go:371] ha-022174-m04 host status = "Running" (err=<nil>)
	I1208 00:58:11.777825  933983 host.go:66] Checking if "ha-022174-m04" exists ...
	I1208 00:58:11.778146  933983 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-022174-m04
	I1208 00:58:11.800256  933983 host.go:66] Checking if "ha-022174-m04" exists ...
	I1208 00:58:11.800566  933983 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 00:58:11.800616  933983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-022174-m04
	I1208 00:58:11.820458  933983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33578 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/ha-022174-m04/id_rsa Username:docker}
	I1208 00:58:11.932005  933983 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 00:58:11.946412  933983 status.go:176] ha-022174-m04 status: &{Name:ha-022174-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.33s)
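Note the non-zero exit above: minikube status deliberately exits with a non-zero code (7 in this run, with m02's host, kubelet, and apiserver all stopped) while any component is down, so a scripted check has to branch on the exit code rather than parse the text. A minimal sketch:

    # Sketch: a non-zero status exit (7 seen above) means "degraded",
    # not that the status command itself failed.
    if ! out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5; then
      echo "cluster degraded: at least one node or component is stopped"
    fi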

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 node start m02 --alsologtostderr -v 5: (13.32168521s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5: (1.391744309s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.22824309s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.23s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.06s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 stop --alsologtostderr -v 5: (37.812853714s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 start --wait true --alsologtostderr -v 5
E1208 00:59:25.031279  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:59:30.128294  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:59:31.296372  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:59:52.737553  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 start --wait true --alsologtostderr -v 5: (1m0.088128078s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.06s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.13s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 node delete m03 --alsologtostderr -v 5: (10.045922824s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.13s)
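The go-template probe at ha_test.go:521 above is runnable as-is and is a compact way to assert that every remaining node reports Ready after the delete:

    # Prints one "True"/"False" per node Ready condition, exactly as the test checks:
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'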

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.82s)

TestMultiControlPlane/serial/StopCluster (36.49s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 stop --alsologtostderr -v 5: (36.364171829s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5: exit status 7 (124.533164ms)
-- stdout --
	ha-022174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022174-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-022174-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 01:00:55.268240  948889 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:00:55.268461  948889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:00:55.268490  948889 out.go:374] Setting ErrFile to fd 2...
	I1208 01:00:55.268510  948889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:00:55.268800  948889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:00:55.269043  948889 out.go:368] Setting JSON to false
	I1208 01:00:55.269108  948889 mustload.go:66] Loading cluster: ha-022174
	I1208 01:00:55.269149  948889 notify.go:221] Checking for updates...
	I1208 01:00:55.269621  948889 config.go:182] Loaded profile config "ha-022174": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:00:55.269663  948889 status.go:174] checking status of ha-022174 ...
	I1208 01:00:55.270246  948889 cli_runner.go:164] Run: docker container inspect ha-022174 --format={{.State.Status}}
	I1208 01:00:55.289277  948889 status.go:371] ha-022174 host status = "Stopped" (err=<nil>)
	I1208 01:00:55.289309  948889 status.go:384] host is not running, skipping remaining checks
	I1208 01:00:55.289317  948889 status.go:176] ha-022174 status: &{Name:ha-022174 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:00:55.289349  948889 status.go:174] checking status of ha-022174-m02 ...
	I1208 01:00:55.289753  948889 cli_runner.go:164] Run: docker container inspect ha-022174-m02 --format={{.State.Status}}
	I1208 01:00:55.314291  948889 status.go:371] ha-022174-m02 host status = "Stopped" (err=<nil>)
	I1208 01:00:55.314330  948889 status.go:384] host is not running, skipping remaining checks
	I1208 01:00:55.314348  948889 status.go:176] ha-022174-m02 status: &{Name:ha-022174-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:00:55.314376  948889 status.go:174] checking status of ha-022174-m04 ...
	I1208 01:00:55.314880  948889 cli_runner.go:164] Run: docker container inspect ha-022174-m04 --format={{.State.Status}}
	I1208 01:00:55.336560  948889 status.go:371] ha-022174-m04 host status = "Stopped" (err=<nil>)
	I1208 01:00:55.336580  948889 status.go:384] host is not running, skipping remaining checks
	I1208 01:00:55.336588  948889 status.go:176] ha-022174-m04 status: &{Name:ha-022174-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.49s)

TestMultiControlPlane/serial/RestartCluster (61.97s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1208 01:01:28.221825  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m0.918064009s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (61.97s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.84s)

TestMultiControlPlane/serial/AddSecondaryNode (101.62s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 node add --control-plane --alsologtostderr -v 5: (1m40.54676677s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-022174 status --alsologtostderr -v 5: (1.077411131s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (101.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.186748269s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.19s)

TestJSONOutput/start/Command (81.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-199927 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1208 01:04:25.030671  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:04:30.127693  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-199927 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.838795431s)
--- PASS: TestJSONOutput/start/Command (81.85s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-199927 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-199927 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.03s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-199927 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-199927 --output=json --user=testUser: (6.026280907s)
--- PASS: TestJSONOutput/stop/Command (6.03s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-222190 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-222190 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.651337ms)
-- stdout --
	{"specversion":"1.0","id":"beba846c-0f8c-4ee8-a9c2-2a7dc59a8404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-222190] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2557dc0c-3a11-457d-ad68-93ca3f105aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"fb350487-f452-4b01-957b-675220cdc645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03edf505-af9d-4dc9-9846-e6c1b2422546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig"}}
	{"specversion":"1.0","id":"6ddae64f-1ae0-43b0-b70c-c2e110708ce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube"}}
	{"specversion":"1.0","id":"df302902-b396-4fdc-b534-49f81bb38231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3f92cb6c-0be6-42ff-ada8-382c7fc3c771","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aeaf59b0-f3d9-4313-b785-8a82fcb483ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-222190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-222190
--- PASS: TestErrorJSONOutput (0.24s)
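Each line of the --output=json stream above is a CloudEvents envelope with the payload under .data. A hedged sketch for pulling the error event out of the stream; jq itself is an assumption here, not part of the test:

    # Sketch: extract the error message from the CloudEvents stream.
    # The .type and .data.message fields match the objects printed above.
    out/minikube-linux-arm64 start -p json-output-error-222190 --memory=3072 \
      --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'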

TestKicCustomNetwork/create_custom_network (61.36s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-460788 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-460788 --network=: (59.106091135s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-460788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-460788
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-460788: (2.229528642s)
--- PASS: TestKicCustomNetwork/create_custom_network (61.36s)

TestKicCustomNetwork/use_default_bridge_network (33.76s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-307312 --network=bridge
E1208 01:06:28.221235  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-307312 --network=bridge: (31.619337128s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-307312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-307312
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-307312: (2.115097191s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.76s)

TestKicExistingNetwork (34.18s)

=== RUN   TestKicExistingNetwork
I1208 01:06:58.852024  846711 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1208 01:06:58.867526  846711 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1208 01:06:58.867609  846711 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1208 01:06:58.867627  846711 cli_runner.go:164] Run: docker network inspect existing-network
W1208 01:06:58.883533  846711 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1208 01:06:58.883568  846711 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1208 01:06:58.883582  846711 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1208 01:06:58.883709  846711 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 01:06:58.904219  846711 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85044198c848 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e2:67:15:e5:e5:9f} reservation:<nil>}
I1208 01:06:58.904639  846711 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017876b0}
I1208 01:06:58.904676  846711 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1208 01:06:58.904727  846711 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1208 01:06:58.972650  846711 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-682377 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-682377 --network=existing-network: (31.888596766s)
helpers_test.go:175: Cleaning up "existing-network-682377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-682377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-682377: (2.129446139s)
I1208 01:07:33.007795  846711 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.18s)
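
Note: the flow above can be reproduced by hand; a minimal sketch, using the network and profile names from this run (the final `docker network rm` cleanup is assumed, not part of the logged steps):
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
    out/minikube-linux-arm64 start -p existing-network-682377 --network=existing-network
    out/minikube-linux-arm64 delete -p existing-network-682377
    docker network rm existing-network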

TestKicCustomSubnet (37.79s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-847050 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-847050 --subnet=192.168.60.0/24: (35.510663052s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-847050 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-847050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-847050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-847050: (2.244752344s)
--- PASS: TestKicCustomSubnet (37.79s)
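
Note: a minimal by-hand equivalent of this test, with the subnet and profile name taken from the run above:
    out/minikube-linux-arm64 start -p custom-subnet-847050 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-847050 --format "{{(index .IPAM.Config 0).Subnet}}"    # the test expects 192.168.60.0/24 here
    out/minikube-linux-arm64 delete -p custom-subnet-847050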

TestKicStaticIP (36.97s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-942510 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-942510 --static-ip=192.168.200.200: (34.576579121s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-942510 ip
helpers_test.go:175: Cleaning up "static-ip-942510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-942510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-942510: (2.216221697s)
--- PASS: TestKicStaticIP (36.97s)
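
Note: the static-IP flow reduces to three commands (IP and profile name from this run); the `ip` subcommand is what the test runs to check the assigned address:
    out/minikube-linux-arm64 start -p static-ip-942510 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-942510 ip    # should print 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-942510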

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.1s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-644847 --driver=docker  --container-runtime=containerd
E1208 01:09:13.206597  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-644847 --driver=docker  --container-runtime=containerd: (33.0730916s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-647686 --driver=docker  --container-runtime=containerd
E1208 01:09:25.033776  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:09:30.128502  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-647686 --driver=docker  --container-runtime=containerd: (33.711918906s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-644847
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-647686
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-647686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-647686
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-647686: (2.151399857s)
helpers_test.go:175: Cleaning up "first-644847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-644847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-644847: (2.497293329s)
--- PASS: TestMinikubeProfile (73.10s)
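
Note: a condensed by-hand version of the profile-switching flow above (profile names from this run); the test inspects the JSON from `profile list -ojson` after each switch:
    out/minikube-linux-arm64 start -p first-644847 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-647686 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 profile first-644847     # switch the active profile
    out/minikube-linux-arm64 profile list -ojson
    out/minikube-linux-arm64 delete -p second-647686
    out/minikube-linux-arm64 delete -p first-644847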

TestMountStart/serial/StartWithMountFirst (8.49s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-674515 --memory=3072 --mount-string /tmp/TestMountStartserial865590669/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-674515 --memory=3072 --mount-string /tmp/TestMountStartserial865590669/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.492845213s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.49s)
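
Note: the mount flags exercised here, gathered into one by-hand invocation (mount string, ports, and profile name copied from this run; the host path is a per-run temp directory):
    out/minikube-linux-arm64 start -p mount-start-1-674515 --memory=3072 \
        --mount-string /tmp/TestMountStartserial865590669/001:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-674515 ssh -- ls /minikube-host    # how the later Verify* steps check the mount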

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-674515 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.6s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-676656 --memory=3072 --mount-string /tmp/TestMountStartserial865590669/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-676656 --memory=3072 --mount-string /tmp/TestMountStartserial865590669/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.594395039s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.60s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676656 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.75s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-674515 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-674515 --alsologtostderr -v=5: (1.750992946s)
--- PASS: TestMountStart/serial/DeleteFirst (1.75s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676656 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-676656
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-676656: (1.293262018s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.71s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-676656
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-676656: (6.70898838s)
--- PASS: TestMountStart/serial/RestartStopped (7.71s)

TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676656 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (110.3s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072769 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1208 01:10:48.098862  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:11:28.221853  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072769 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.769485409s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.30s)
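
Note: the two-node bring-up by hand (flags copied from this run); `status` afterwards should report both multinode-072769 and multinode-072769-m02 as Running:
    out/minikube-linux-arm64 start -p multinode-072769 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr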

TestMultiNode/serial/DeployApp2Nodes (5.41s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-072769 -- rollout status deployment/busybox: (3.522893612s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-7qpwq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-hn2dx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-7qpwq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-hn2dx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-7qpwq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-hn2dx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.41s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-7qpwq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-7qpwq -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-hn2dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-072769 -- exec busybox-7b57f96db7-hn2dx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (56.47s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-072769 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-072769 -v=5 --alsologtostderr: (55.73098278s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.47s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-072769 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.73s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.76s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp testdata/cp-test.txt multinode-072769:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3481085356/001/cp-test_multinode-072769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769:/home/docker/cp-test.txt multinode-072769-m02:/home/docker/cp-test_multinode-072769_multinode-072769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test_multinode-072769_multinode-072769-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769:/home/docker/cp-test.txt multinode-072769-m03:/home/docker/cp-test_multinode-072769_multinode-072769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test_multinode-072769_multinode-072769-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp testdata/cp-test.txt multinode-072769-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3481085356/001/cp-test_multinode-072769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m02:/home/docker/cp-test.txt multinode-072769:/home/docker/cp-test_multinode-072769-m02_multinode-072769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test_multinode-072769-m02_multinode-072769.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m02:/home/docker/cp-test.txt multinode-072769-m03:/home/docker/cp-test_multinode-072769-m02_multinode-072769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test_multinode-072769-m02_multinode-072769-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp testdata/cp-test.txt multinode-072769-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3481085356/001/cp-test_multinode-072769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m03:/home/docker/cp-test.txt multinode-072769:/home/docker/cp-test_multinode-072769-m03_multinode-072769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769 "sudo cat /home/docker/cp-test_multinode-072769-m03_multinode-072769.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769-m03:/home/docker/cp-test.txt multinode-072769-m02:/home/docker/cp-test_multinode-072769-m03_multinode-072769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test_multinode-072769-m03_multinode-072769-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.76s)
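
Note: the copy matrix above is three shapes of the same `cp` command, each verified with `ssh -- sudo cat`; a sketch with node names from this run (the node-to-host destination path is illustrative):
    out/minikube-linux-arm64 -p multinode-072769 cp testdata/cp-test.txt multinode-072769:/home/docker/cp-test.txt                # host -> node
    out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769:/home/docker/cp-test.txt /tmp/cp-test_multinode-072769.txt   # node -> host
    out/minikube-linux-arm64 -p multinode-072769 cp multinode-072769:/home/docker/cp-test.txt multinode-072769-m02:/home/docker/cp-test.txt   # node -> node
    out/minikube-linux-arm64 -p multinode-072769 ssh -n multinode-072769-m02 "sudo cat /home/docker/cp-test.txt"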

TestMultiNode/serial/StopNode (2.47s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-072769 node stop m03: (1.310094134s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072769 status: exit status 7 (584.14267ms)
-- stdout --
	multinode-072769
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072769-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072769-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr: exit status 7 (573.80465ms)
-- stdout --
	multinode-072769
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-072769-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-072769-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 01:13:38.598268 1002426 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:13:38.598603 1002426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:13:38.598624 1002426 out.go:374] Setting ErrFile to fd 2...
	I1208 01:13:38.598631 1002426 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:13:38.598996 1002426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:13:38.599242 1002426 out.go:368] Setting JSON to false
	I1208 01:13:38.599271 1002426 mustload.go:66] Loading cluster: multinode-072769
	I1208 01:13:38.599955 1002426 config.go:182] Loaded profile config "multinode-072769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:13:38.599974 1002426 status.go:174] checking status of multinode-072769 ...
	I1208 01:13:38.600714 1002426 cli_runner.go:164] Run: docker container inspect multinode-072769 --format={{.State.Status}}
	I1208 01:13:38.602522 1002426 notify.go:221] Checking for updates...
	I1208 01:13:38.620183 1002426 status.go:371] multinode-072769 host status = "Running" (err=<nil>)
	I1208 01:13:38.620206 1002426 host.go:66] Checking if "multinode-072769" exists ...
	I1208 01:13:38.620529 1002426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-072769
	I1208 01:13:38.647653 1002426 host.go:66] Checking if "multinode-072769" exists ...
	I1208 01:13:38.648140 1002426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:13:38.648241 1002426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-072769
	I1208 01:13:38.673755 1002426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33683 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/multinode-072769/id_rsa Username:docker}
	I1208 01:13:38.784243 1002426 ssh_runner.go:195] Run: systemctl --version
	I1208 01:13:38.791010 1002426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:13:38.804986 1002426 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:13:38.867455 1002426 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-08 01:13:38.856691238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:13:38.868023 1002426 kubeconfig.go:125] found "multinode-072769" server: "https://192.168.67.2:8443"
	I1208 01:13:38.868065 1002426 api_server.go:166] Checking apiserver status ...
	I1208 01:13:38.868116 1002426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 01:13:38.881054 1002426 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1429/cgroup
	I1208 01:13:38.890518 1002426 api_server.go:182] apiserver freezer: "7:freezer:/docker/79e5e1c550a52592d8a83d16b8af45d8cb04b80a6aa209eb6a7379bfb7e2f6df/kubepods/burstable/pod2c10c71de5e4087c0618d9c51e9b48ef/accc1567b7fc02fb7fd3de4ebf21c09125e0e5f6df9f000aeffaa57da9313b1b"
	I1208 01:13:38.890602 1002426 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/79e5e1c550a52592d8a83d16b8af45d8cb04b80a6aa209eb6a7379bfb7e2f6df/kubepods/burstable/pod2c10c71de5e4087c0618d9c51e9b48ef/accc1567b7fc02fb7fd3de4ebf21c09125e0e5f6df9f000aeffaa57da9313b1b/freezer.state
	I1208 01:13:38.898901 1002426 api_server.go:204] freezer state: "THAWED"
	I1208 01:13:38.898928 1002426 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1208 01:13:38.907575 1002426 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1208 01:13:38.907605 1002426 status.go:463] multinode-072769 apiserver status = Running (err=<nil>)
	I1208 01:13:38.907617 1002426 status.go:176] multinode-072769 status: &{Name:multinode-072769 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:13:38.907660 1002426 status.go:174] checking status of multinode-072769-m02 ...
	I1208 01:13:38.907994 1002426 cli_runner.go:164] Run: docker container inspect multinode-072769-m02 --format={{.State.Status}}
	I1208 01:13:38.931339 1002426 status.go:371] multinode-072769-m02 host status = "Running" (err=<nil>)
	I1208 01:13:38.931367 1002426 host.go:66] Checking if "multinode-072769-m02" exists ...
	I1208 01:13:38.931712 1002426 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-072769-m02
	I1208 01:13:38.950511 1002426 host.go:66] Checking if "multinode-072769-m02" exists ...
	I1208 01:13:38.950833 1002426 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 01:13:38.950897 1002426 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-072769-m02
	I1208 01:13:38.969297 1002426 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33688 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/multinode-072769-m02/id_rsa Username:docker}
	I1208 01:13:39.076700 1002426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 01:13:39.090747 1002426 status.go:176] multinode-072769-m02 status: &{Name:multinode-072769-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:13:39.090817 1002426 status.go:174] checking status of multinode-072769-m03 ...
	I1208 01:13:39.091253 1002426 cli_runner.go:164] Run: docker container inspect multinode-072769-m03 --format={{.State.Status}}
	I1208 01:13:39.109769 1002426 status.go:371] multinode-072769-m03 host status = "Stopped" (err=<nil>)
	I1208 01:13:39.109800 1002426 status.go:384] host is not running, skipping remaining checks
	I1208 01:13:39.109808 1002426 status.go:176] multinode-072769-m03 status: &{Name:multinode-072769-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
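
Note: the exit code is the point of this check: in this run `status` exited 7 once a node was stopped, so a by-hand version looks like:
    out/minikube-linux-arm64 -p multinode-072769 node stop m03
    out/minikube-linux-arm64 -p multinode-072769 status; echo "exit=$?"    # exit=7 here, with m03 reported host/kubelet Stopped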

TestMultiNode/serial/StartAfterStop (8.04s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-072769 node start m03 -v=5 --alsologtostderr: (7.230994421s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.04s)

TestMultiNode/serial/RestartKeepsNodes (74.95s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072769
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-072769
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-072769: (25.420805299s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072769 --wait=true -v=5 --alsologtostderr
E1208 01:14:25.031497  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:14:30.128700  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072769 --wait=true -v=5 --alsologtostderr: (49.384026166s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072769
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.95s)

TestMultiNode/serial/DeleteNode (5.82s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-072769 node delete m03: (5.032043121s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.82s)

TestMultiNode/serial/StopMultiNode (24.24s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-072769 stop: (24.033441226s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072769 status: exit status 7 (100.893042ms)
-- stdout --
	multinode-072769
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072769-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr: exit status 7 (105.565456ms)
-- stdout --
	multinode-072769
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-072769-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1208 01:15:32.107818 1011229 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:15:32.108035 1011229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:15:32.108063 1011229 out.go:374] Setting ErrFile to fd 2...
	I1208 01:15:32.108082 1011229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:15:32.108386 1011229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:15:32.108618 1011229 out.go:368] Setting JSON to false
	I1208 01:15:32.108673 1011229 mustload.go:66] Loading cluster: multinode-072769
	I1208 01:15:32.108761 1011229 notify.go:221] Checking for updates...
	I1208 01:15:32.110240 1011229 config.go:182] Loaded profile config "multinode-072769": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:15:32.110300 1011229 status.go:174] checking status of multinode-072769 ...
	I1208 01:15:32.112863 1011229 cli_runner.go:164] Run: docker container inspect multinode-072769 --format={{.State.Status}}
	I1208 01:15:32.130516 1011229 status.go:371] multinode-072769 host status = "Stopped" (err=<nil>)
	I1208 01:15:32.130540 1011229 status.go:384] host is not running, skipping remaining checks
	I1208 01:15:32.130547 1011229 status.go:176] multinode-072769 status: &{Name:multinode-072769 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 01:15:32.130574 1011229 status.go:174] checking status of multinode-072769-m02 ...
	I1208 01:15:32.130901 1011229 cli_runner.go:164] Run: docker container inspect multinode-072769-m02 --format={{.State.Status}}
	I1208 01:15:32.159470 1011229 status.go:371] multinode-072769-m02 host status = "Stopped" (err=<nil>)
	I1208 01:15:32.159500 1011229 status.go:384] host is not running, skipping remaining checks
	I1208 01:15:32.159517 1011229 status.go:176] multinode-072769-m02 status: &{Name:multinode-072769-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.24s)

TestMultiNode/serial/RestartMultiNode (49.29s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072769 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1208 01:16:11.298405  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072769 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.535592396s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-072769 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.29s)

TestMultiNode/serial/ValidateNameConflict (37.07s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-072769
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072769-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-072769-m02 --driver=docker  --container-runtime=containerd: exit status 14 (118.180398ms)
-- stdout --
	* [multinode-072769-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-072769-m02' is duplicated with machine name 'multinode-072769-m02' in profile 'multinode-072769'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-072769-m03 --driver=docker  --container-runtime=containerd
E1208 01:16:28.221052  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-072769-m03 --driver=docker  --container-runtime=containerd: (34.492313153s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-072769
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-072769: exit status 80 (339.498825ms)
-- stdout --
	* Adding node m03 to cluster multinode-072769 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-072769-m03 already exists in multinode-072769-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-072769-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-072769-m03: (2.065777379s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.07s)

TestPreload (119.62s)
=== RUN   TestPreload
preload_test.go:45: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-708708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:45: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-708708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (55.421172738s)
preload_test.go:53: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-708708 image pull gcr.io/k8s-minikube/busybox
preload_test.go:53: (dbg) Done: out/minikube-linux-arm64 -p test-preload-708708 image pull gcr.io/k8s-minikube/busybox: (2.443188395s)
preload_test.go:59: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-708708
preload_test.go:59: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-708708: (6.008880174s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-708708 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:67: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-708708 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (52.650657349s)
preload_test.go:72: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-708708 image list
helpers_test.go:175: Cleaning up "test-preload-708708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-708708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-708708: (2.858137304s)
--- PASS: TestPreload (119.62s)
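
Note: the preload round-trip by hand (image, flags, and profile name from this run); the final `image list` is where the test checks that the manually pulled image survived the stop and the restart with --preload=true:
    out/minikube-linux-arm64 start -p test-preload-708708 --memory=3072 --preload=false --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p test-preload-708708 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-708708
    out/minikube-linux-arm64 start -p test-preload-708708 --preload=true --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p test-preload-708708 image list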

TestScheduledStopUnix (107.7s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-758368 --memory=3072 --driver=docker  --container-runtime=containerd
E1208 01:19:25.033526  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:19:30.128520  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-758368 --memory=3072 --driver=docker  --container-runtime=containerd: (30.717048145s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-758368 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1208 01:19:33.631766 1027171 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:19:33.631904 1027171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:19:33.631913 1027171 out.go:374] Setting ErrFile to fd 2...
	I1208 01:19:33.631919 1027171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:19:33.632195 1027171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:19:33.632461 1027171 out.go:368] Setting JSON to false
	I1208 01:19:33.632578 1027171 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:19:33.632972 1027171 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:19:33.633047 1027171 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/config.json ...
	I1208 01:19:33.633233 1027171 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:19:33.633356 1027171 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-758368 -n scheduled-stop-758368
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-758368 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1208 01:19:34.125122 1027261 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:19:34.125348 1027261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:19:34.125372 1027261 out.go:374] Setting ErrFile to fd 2...
	I1208 01:19:34.125400 1027261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:19:34.125692 1027261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:19:34.125996 1027261 out.go:368] Setting JSON to false
	I1208 01:19:34.126235 1027261 daemonize_unix.go:73] killing process 1027188 as it is an old scheduled stop
	I1208 01:19:34.126422 1027261 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:19:34.126866 1027261 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:19:34.126975 1027261 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/config.json ...
	I1208 01:19:34.127180 1027261 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:19:34.127370 1027261 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1208 01:19:34.136563  846711 retry.go:31] will retry after 90.301µs: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.137713  846711 retry.go:31] will retry after 200.062µs: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.138848  846711 retry.go:31] will retry after 285.462µs: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.139983  846711 retry.go:31] will retry after 246.804µs: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.141093  846711 retry.go:31] will retry after 427.137µs: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.142256  846711 retry.go:31] will retry after 1.010352ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.143387  846711 retry.go:31] will retry after 1.106241ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.145565  846711 retry.go:31] will retry after 2.466639ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.148761  846711 retry.go:31] will retry after 3.459748ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.152981  846711 retry.go:31] will retry after 5.350406ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.159455  846711 retry.go:31] will retry after 8.103373ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.169231  846711 retry.go:31] will retry after 6.940814ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.176824  846711 retry.go:31] will retry after 9.334709ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.187084  846711 retry.go:31] will retry after 28.686603ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
I1208 01:19:34.216593  846711 retry.go:31] will retry after 25.19152ms: open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-758368 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-758368 -n scheduled-stop-758368
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-758368
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-758368 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 01:20:00.387804 1027951 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:20:00.387932 1027951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:00.387939 1027951 out.go:374] Setting ErrFile to fd 2...
	I1208 01:20:00.387944 1027951 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:20:00.388444 1027951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:20:00.388811 1027951 out.go:368] Setting JSON to false
	I1208 01:20:00.388934 1027951 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:20:00.389579 1027951 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1208 01:20:00.399461 1027951 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/scheduled-stop-758368/config.json ...
	I1208 01:20:00.399760 1027951 mustload.go:66] Loading cluster: scheduled-stop-758368
	I1208 01:20:00.399975 1027951 config.go:182] Loaded profile config "scheduled-stop-758368": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-758368
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-758368: exit status 7 (73.833457ms)

                                                
                                                
-- stdout --
	scheduled-stop-758368
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-758368 -n scheduled-stop-758368
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-758368 -n scheduled-stop-758368: exit status 7 (71.079953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-758368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-758368
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-758368: (4.994083239s)
--- PASS: TestScheduledStopUnix (107.70s)
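TestScheduledStopUnix exercises three behaviours: scheduling a stop, cancelling it with --cancel-scheduled, and verifying the host eventually reports Stopped. A minimal in-process sketch of a cancellable scheduled stop using time.AfterFunc (illustrative only; the real minikube forks a background process and records its pid in the pid file polled above):

package main

import (
	"fmt"
	"time"
)

// scheduleStop arranges for stop() to run after d and returns a cancel
// function; cancel reports whether it won the race with the timer.
func scheduleStop(d time.Duration, stop func()) (cancel func() bool) {
	return time.AfterFunc(d, stop).Stop
}

func main() {
	cancel := scheduleStop(15*time.Second, func() {
		fmt.Println("stopping cluster")
	})
	// The --cancel-scheduled path: revoke the stop before it fires.
	if cancel() {
		fmt.Println("* All existing scheduled stops cancelled")
	}
}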

                                                
                                    
TestInsufficientStorage (12.95s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-858455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-858455 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.320107085s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4288de72-0ab3-4534-afb1-f37a88f3dd52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-858455] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf68c1e1-9d90-4b2d-ad34-ccbdc51a33d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22054"}}
	{"specversion":"1.0","id":"0dcb39b4-4923-4002-a93d-85bc59c77d9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5fbbac7d-bfc4-461a-a193-e68df6037069","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig"}}
	{"specversion":"1.0","id":"d86be9b0-e125-49d5-b763-f0e86d85273d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube"}}
	{"specversion":"1.0","id":"a38fae89-e910-4308-8793-9003cae16500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"33320a7a-ad6c-4070-8483-09c47722caac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37da60af-988d-456b-b5a1-ae23700e8d92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"46659044-709b-4381-83ac-67ddc42e61d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4fe4e5e2-b494-4aa4-813d-1aff28488b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"abc597d2-34ba-4db3-9325-c025c7ba3f4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7e2e0952-8bb3-4491-abb0-b47c2e1b4922","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-858455\" primary control-plane node in \"insufficient-storage-858455\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bd6fd29-f592-4774-af58-2f055320bf2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3213d661-e3b9-4d52-a71f-372fb7ce8018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9002c087-ef05-4698-9a06-5597aa2c715d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-858455 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-858455 --output=json --layout=cluster: exit status 7 (315.644297ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-858455","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-858455","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:21:01.178024 1029792 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-858455" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-858455 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-858455 --output=json --layout=cluster: exit status 7 (325.292825ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-858455","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-858455","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1208 01:21:01.505969 1029861 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-858455" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
	E1208 01:21:01.517016 1029861 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/insufficient-storage-858455/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-858455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-858455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-858455: (1.987166059s)
--- PASS: TestInsufficientStorage (12.95s)
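With --output=json, minikube start emits one CloudEvent per line, and the test fails fast on the io.k8s.sigs.minikube.error event named RSRC_DOCKER_STORAGE (exit code 26). A sketch of consuming that stream line by line (struct fields are taken from the events shown above; all other fields are ignored):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the CloudEvents printed by
// `minikube start --output=json` above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		Exitcode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe minikube's JSON output here
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" && e.Data.Name == "RSRC_DOCKER_STORAGE" {
			fmt.Printf("out of disk (exit %s): %s\n", e.Data.Exitcode, e.Data.Message)
		}
	}
}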

                                                
                                    
TestRunningBinaryUpgrade (73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2304802705 start -p running-upgrade-499586 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1208 01:29:25.031268  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:29:30.127991  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2304802705 start -p running-upgrade-499586 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.824110969s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-499586 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-499586 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.989954835s)
helpers_test.go:175: Cleaning up "running-upgrade-499586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-499586
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-499586: (2.78440279s)
--- PASS: TestRunningBinaryUpgrade (73.00s)

                                                
                                    
TestMissingContainerUpgrade (171.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3415496282 start -p missing-upgrade-343850 --memory=3072 --driver=docker  --container-runtime=containerd
E1208 01:21:28.221195  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3415496282 start -p missing-upgrade-343850 --memory=3072 --driver=docker  --container-runtime=containerd: (1m15.296522608s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-343850
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-343850
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-343850 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-343850 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m21.906464563s)
helpers_test.go:175: Cleaning up "missing-upgrade-343850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-343850
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-343850: (2.180017511s)
--- PASS: TestMissingContainerUpgrade (171.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (102.050216ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-542982] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
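The exit status 14 (MK_USAGE) comes from flag validation: --no-kubernetes and --kubernetes-version contradict each other. A generic sketch of such a mutual-exclusion check with Go's flag package (illustrative only, not minikube's option handling):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// The two options contradict each other; refuse the combination.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the MK_USAGE exit status seen above
	}
	fmt.Println("flags are consistent")
}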

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-542982 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-542982 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.640326929s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-542982 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (6.070895472s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-542982 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-542982 status -o json: exit status 2 (443.932841ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-542982","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-542982
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-542982: (4.566517539s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.08s)

                                                
                                    
TestNoKubernetes/serial/Start (9.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-542982 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.132022328s)
--- PASS: TestNoKubernetes/serial/Start (9.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-542982 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-542982 "sudo systemctl is-active --quiet service kubelet": exit status 1 (356.313682ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
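The assertion here is negative: systemctl is-active --quiet exits 0 only when the unit is active and non-zero otherwise (status 3 for an inactive unit, which is what the ssh error above surfaces). A sketch of reading that exit code locally with os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` prints nothing; its exit
	// status alone says whether the unit is running.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("kubelet is active")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Printf("kubelet is not active (exit status %d)\n", ee.ExitCode())
		return
	}
	fmt.Println("could not run systemctl:", err)
}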

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-542982
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-542982: (1.431541428s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-542982 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-542982 --driver=docker  --container-runtime=containerd: (7.201378794s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-542982 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-542982 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.309789ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (11.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (11.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (302.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4048361806 start -p stopped-upgrade-306279 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1208 01:24:25.030603  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:24:30.127825  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4048361806 start -p stopped-upgrade-306279 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.011735186s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4048361806 -p stopped-upgrade-306279 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4048361806 -p stopped-upgrade-306279 stop: (1.284858759s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-306279 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1208 01:25:53.208540  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:26:28.221883  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:27:28.101394  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-306279 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.900454334s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (302.20s)
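The upgrade test is three CLI steps against one profile: start with the released binary, stop it with the same binary, then start again with the binary under test. A sketch of that sequencing with os/exec (the binary paths and profile name are the ones from this run; the helper itself is illustrative, not the test's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one CLI step, streaming its output, and aborts on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", bin, args, err)
		os.Exit(1)
	}
}

func main() {
	old := "/tmp/minikube-v1.35.0.4048361806" // released binary, from the log
	cur := "out/minikube-linux-arm64"         // binary under test
	profile := "stopped-upgrade-306279"

	run(old, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(old, "-p", profile, "stop")
	run(cur, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=containerd")
}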

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-306279
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-306279: (2.037965066s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.04s)

                                                
                                    
TestPause/serial/Start (79.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-871928 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1208 01:31:28.221275  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-871928 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m19.200330152s)
--- PASS: TestPause/serial/Start (79.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-871928 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-871928 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.348930498s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.36s)

                                                
                                    
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-871928 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-871928 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-871928 --output=json --layout=cluster: exit status 2 (335.03298ms)

                                                
                                                
-- stdout --
	{"Name":"pause-871928","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-871928","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
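status --output=json --layout=cluster prints a single JSON document whose StatusCode fields reuse HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage all appear in this report. A sketch decoding it (the struct fields mirror the output above and nothing more):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterStatus mirrors the JSON printed by
// `minikube status --output=json --layout=cluster` in the runs above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil { // pipe the status JSON here
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}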

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-871928 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-871928 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
TestPause/serial/DeletePaused (2.88s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-871928 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-871928 --alsologtostderr -v=5: (2.884886843s)
--- PASS: TestPause/serial/DeletePaused (2.88s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-871928
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-871928: exit status 1 (17.880134ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-871928: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
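Deletion is verified negatively: once the profile is gone, docker volume inspect pause-871928 must fail with "no such volume". A sketch of that check with os/exec (the volume name is the one from this run):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	name := "pause-871928" // the deleted profile's volume, from the log
	var stderr bytes.Buffer
	cmd := exec.Command("docker", "volume", "inspect", name)
	cmd.Stderr = &stderr
	if err := cmd.Run(); err == nil {
		fmt.Printf("volume %s still exists; cleanup failed\n", name)
		os.Exit(1)
	}
	// A non-zero exit with "no such volume" is the expected outcome here.
	fmt.Printf("volume %s gone as expected: %s", name, stderr.String())
}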

                                                
                                    
TestNetworkPlugins/group/false (3.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-475514 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-475514 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (216.45615ms)

                                                
                                                
-- stdout --
	* [false-475514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22054
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 01:32:38.876773 1081849 out.go:360] Setting OutFile to fd 1 ...
	I1208 01:32:38.876920 1081849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:32:38.876948 1081849 out.go:374] Setting ErrFile to fd 2...
	I1208 01:32:38.876971 1081849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 01:32:38.877281 1081849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
	I1208 01:32:38.877791 1081849 out.go:368] Setting JSON to false
	I1208 01:32:38.878900 1081849 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":22512,"bootTime":1765135047,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1208 01:32:38.878981 1081849 start.go:143] virtualization:  
	I1208 01:32:38.883503 1081849 out.go:179] * [false-475514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1208 01:32:38.887655 1081849 out.go:179]   - MINIKUBE_LOCATION=22054
	I1208 01:32:38.887676 1081849 notify.go:221] Checking for updates...
	I1208 01:32:38.893791 1081849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 01:32:38.896821 1081849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
	I1208 01:32:38.899716 1081849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
	I1208 01:32:38.902651 1081849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1208 01:32:38.905577 1081849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 01:32:38.908951 1081849 config.go:182] Loaded profile config "kubernetes-upgrade-614992": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1208 01:32:38.909067 1081849 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 01:32:38.934641 1081849 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1208 01:32:38.934778 1081849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1208 01:32:38.997802 1081849 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-08 01:32:38.987308056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1208 01:32:38.997924 1081849 docker.go:319] overlay module found
	I1208 01:32:39.001250 1081849 out.go:179] * Using the docker driver based on user configuration
	I1208 01:32:39.009335 1081849 start.go:309] selected driver: docker
	I1208 01:32:39.009368 1081849 start.go:927] validating driver "docker" against <nil>
	I1208 01:32:39.009384 1081849 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 01:32:39.013187 1081849 out.go:203] 
	W1208 01:32:39.016405 1081849 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1208 01:32:39.019302 1081849 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-475514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-475514

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-475514" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 01:23:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-614992
contexts:
- context:
    cluster: kubernetes-upgrade-614992
    user: kubernetes-upgrade-614992
  name: kubernetes-upgrade-614992
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-614992
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.crt
    client-key: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.key
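Every kubectl probe in this debugLogs dump fails with "context was not found" because the kubeconfig above only defines the kubernetes-upgrade-614992 context and current-context is empty; false-475514 was never started. A sketch of that context lookup using gopkg.in/yaml.v3 (parsing only the fields shown; kubectl's own loader is more involved):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeconfig captures only the fields needed for a context lookup.
type kubeconfig struct {
	CurrentContext string `yaml:"current-context"`
	Contexts       []struct {
		Name string `yaml:"name"`
	} `yaml:"contexts"`
}

func main() {
	data, err := os.ReadFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg kubeconfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := "false-475514"
	for _, c := range cfg.Contexts {
		if c.Name == want {
			fmt.Println("context found:", want)
			return
		}
	}
	fmt.Printf("error: context %q does not exist (current-context is %q)\n", want, cfg.CurrentContext)
}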

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-475514

>>> host: docker daemon status:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: docker daemon config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /etc/docker/daemon.json:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: docker system info:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: cri-docker daemon status:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: cri-docker daemon config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: cri-dockerd version:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: containerd daemon status:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: containerd daemon config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /etc/containerd/config.toml:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: containerd config dump:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: crio daemon status:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: crio daemon config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: /etc/crio:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

>>> host: crio config:
* Profile "false-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-475514"

----------------------- debugLogs end: false-475514 [took: 3.366519211s] --------------------------------
helpers_test.go:175: Cleaning up "false-475514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-475514
--- PASS: TestNetworkPlugins/group/false (3.74s)
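The "false" network-plugin run only collects debug logs against a profile that was never started, which is why every probe above reports the profile as missing; cleanup then deletes it. The manual equivalent, using the commands the log itself suggests:

out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 delete -p false-475514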

TestStartStop/group/old-k8s-version/serial/FirstStart (62.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-895688 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1208 01:36:28.221641  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-895688 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m2.887875761s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.89s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-895688 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d272ec09-51df-4e2c-930f-1f518341c347] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d272ec09-51df-4e2c-930f-1f518341c347] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003219531s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-895688 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)
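The DeployApp step amounts to creating the busybox pod, waiting for it to become Ready, and probing its file-descriptor limit. A hand-run sketch of the same flow, using kubectl wait in place of the test helper's poller (an assumption, not the harness's actual mechanism):

kubectl --context old-k8s-version-895688 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-895688 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
kubectl --context old-k8s-version-895688 exec busybox -- /bin/sh -c "ulimit -n"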

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-895688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-895688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.306065306s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-895688 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.47s)
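The --images/--registries flags rewrite the metrics-server image reference before the addon is applied. A hedged way to confirm the override landed (the composed fake.domain/registry.k8s.io/echoserver:1.4 value is an assumption about how minikube joins registry and image, not output from this run):

kubectl --context old-k8s-version-895688 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'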

TestStartStop/group/old-k8s-version/serial/Stop (12.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-895688 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-895688 --alsologtostderr -v=3: (12.403141074s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-895688 -n old-k8s-version-895688
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-895688 -n old-k8s-version-895688: exit status 7 (78.566823ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-895688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
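minikube status encodes cluster state in its exit code as well as its output, which is why the harness tolerates the non-zero exit above: exit status 7 with "Stopped" on stdout is the expected shape for a halted node. Checked by hand it would look like:

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-895688 -n old-k8s-version-895688
echo $?    # 7 while the host is stopped, per the run above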

TestStartStop/group/old-k8s-version/serial/SecondStart (52.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-895688 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-895688 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (52.337689304s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-895688 -n old-k8s-version-895688
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.74s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-spxsw" [f3f93c1e-6073-4da9-bfd8-4bbb23129e4d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003529458s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-spxsw" [f3f93c1e-6073-4da9-bfd8-4bbb23129e4d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004252899s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-895688 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-895688 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-895688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-895688 -n old-k8s-version-895688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-895688 -n old-k8s-version-895688: exit status 2 (379.184722ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-895688 -n old-k8s-version-895688
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-895688 -n old-k8s-version-895688: exit status 2 (336.669232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-895688 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-895688 -n old-k8s-version-895688
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-895688 -n old-k8s-version-895688
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.26s)
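The Pause test drives the same status template through a pause/unpause cycle: while paused, {{.APIServer}} reports "Paused" and {{.Kubelet}} reports "Stopped", each with exit status 2. A sketch of the cycle run by hand (the post-unpause expectation is inferred from the passing final status checks, which print nothing abnormal in the log):

out/minikube-linux-arm64 pause -p old-k8s-version-895688 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-895688 -n old-k8s-version-895688   # "Paused", exit 2
out/minikube-linux-arm64 unpause -p old-k8s-version-895688 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-895688 -n old-k8s-version-895688   # running again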

TestStartStop/group/embed-certs/serial/FirstStart (50.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1208 01:39:25.031119  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:39:30.127936  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (50.768130104s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.77s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719683 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [034e4c5c-b4f9-473b-aec5-814b429e2574] Pending
helpers_test.go:352: "busybox" [034e4c5c-b4f9-473b-aec5-814b429e2574] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [034e4c5c-b4f9-473b-aec5-814b429e2574] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003646823s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719683 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-719683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-719683 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-719683 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-719683 --alsologtostderr -v=3: (12.135687221s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719683 -n embed-certs-719683
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719683 -n embed-certs-719683: exit status 7 (89.376937ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-719683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (53.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-719683 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (53.304958928s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-719683 -n embed-certs-719683
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.69s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qlrfq" [1164dcd9-1117-4322-8ab8-2bfa4edded21] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003520542s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qlrfq" [1164dcd9-1117-4322-8ab8-2bfa4edded21] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00351912s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-719683 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-719683 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-719683 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719683 -n embed-certs-719683
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719683 -n embed-certs-719683: exit status 2 (353.472132ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719683 -n embed-certs-719683
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719683 -n embed-certs-719683: exit status 2 (327.618026ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-719683 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-719683 -n embed-certs-719683
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-719683 -n embed-certs-719683
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1208 01:41:28.221290  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (48.697048612s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.70s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-843696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c3eb3073-4596-46b7-925f-7529019aaa8e] Pending
helpers_test.go:352: "busybox" [c3eb3073-4596-46b7-925f-7529019aaa8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c3eb3073-4596-46b7-925f-7529019aaa8e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003891971s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-843696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-843696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006911152s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-843696 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-843696 --alsologtostderr -v=3
E1208 01:42:12.528612  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.535033  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.546595  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.568122  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.609555  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.691080  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:12.852757  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:13.174564  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:13.816662  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:15.098748  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:17.660191  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-843696 --alsologtostderr -v=3: (12.108247491s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696: exit status 7 (69.614211ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-843696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1208 01:42:22.782403  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:33.024131  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:33.210610  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 01:42:53.505416  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-843696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (54.981075141s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zbp7f" [47996b74-92e6-4afd-9b98-33e63a6ff624] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003421165s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zbp7f" [47996b74-92e6-4afd-9b98-33e63a6ff624] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003685371s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-843696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-843696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-843696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696: exit status 2 (356.328021ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696: exit status 2 (370.781677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-843696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-843696 -n default-k8s-diff-port-843696
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.23s)

TestStartStop/group/no-preload/serial/Stop (1.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-536520 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-536520 --alsologtostderr -v=3: (1.300345542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.30s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-536520 -n no-preload-536520: exit status 7 (73.781736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-536520 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-457779 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-457779 --alsologtostderr -v=3: (1.305381892s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-457779 -n newest-cni-457779: exit status 7 (68.554228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-457779 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-457779 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
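VerifyKubernetesImages works from the JSON form of image list and flags anything outside the expected Kubernetes image set, such as the kindnet image above. Assuming the JSON is a list of image objects carrying a repoTags field (and that jq is available), the same check can be eyeballed with:

out/minikube-linux-arm64 -p newest-cni-457779 image list --format=json | jq -r '.[].repoTags[]'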

TestNetworkPlugins/group/auto/Start (78.03s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m18.029693359s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.03s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-475514 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pmr9h" [35dc581c-f8c8-482c-9465-3bc994df1ce2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pmr9h" [35dc581c-f8c8-482c-9465-3bc994df1ce2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004143639s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.27s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
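Localhost and HairPin probe two different return paths from the same netcat pod: a TCP connect to localhost:8080 inside the pod, and a connect to the pod's own service name, which only succeeds when hairpin NAT lets a pod reach itself through its service. The two probes, reduced to their essentials:

kubectl --context auto-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
kubectl --context auto-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"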

TestNetworkPlugins/group/kindnet/Start (81.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m21.062480906s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.06s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hs2kf" [d9988be5-58b9-48bb-9d87-b892b7e8482c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003695891s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
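ControllerPod just confirms the kindnet DaemonSet pod is Ready before the traffic tests run. Expressed with kubectl wait instead of the test helper's poller (an equivalent sketch, not the harness's code):

kubectl --context kindnet-475514 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m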

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-475514 "pgrep -a kubelet"
I1208 02:03:08.726156  846711 config.go:182] Loaded profile config "kindnet-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qncqq" [b833c24c-1579-47d3-8c3f-ff1aa603a15b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qncqq" [b833c24c-1579-47d3-8c3f-ff1aa603a15b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003668028s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (58.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.347516976s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.35s)
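
[editor's note] Each Start test in this group boils down to one minikube invocation that selects the CNI with --cni; the flag takes either a plugin name (calico, flannel, bridge, kindnet) or, as the custom-flannel run later in this report shows, a path to a manifest. A sketch mirroring the command in the log (binary name "minikube" on PATH is an assumption; the report uses out/minikube-linux-arm64):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirror of the start command logged above: pick the CNI with --cni.
	cmd := exec.Command("minikube", "start", "-p", "calico-475514",
		"--memory=3072", "--wait=true", "--wait-timeout=15m",
		"--cni=calico", "--driver=docker", "--container-runtime=containerd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}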

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-r99mg" [65c64ff9-37e1-426e-9008-9b060abc2063] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-r99mg" [65c64ff9-37e1-426e-9008-9b060abc2063] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003228787s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-475514 "pgrep -a kubelet"
I1208 02:04:44.004686  846711 config.go:182] Loaded profile config "calico-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-47bj5" [2a9905c0-9f3b-47e0-9e76-d470b309b4bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-47bj5" [2a9905c0-9f3b-47e0-9e76-d470b309b4bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.002893267s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (59.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.504776107s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.50s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-475514 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-98rxv" [d6bb5f8f-f1ae-4f57-abf6-604e0b2e9404] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-98rxv" [d6bb5f8f-f1ae-4f57-abf6-604e0b2e9404] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003802438s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (80s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.9949286s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.00s)

TestNetworkPlugins/group/flannel/Start (60.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1208 02:07:12.527823  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/old-k8s-version-895688/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:07:33.712211  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/auto-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.421561  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.431788  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.443163  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.464584  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.506080  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.587716  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:02.749210  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:03.070988  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:03.712916  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:04.994600  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:07.556534  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.936529409s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.94s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-475514 "pgrep -a kubelet"
I1208 02:08:08.246179  846711 config.go:182] Loaded profile config "enable-default-cni-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4s5c9" [06963bcc-1577-43c8-a339-c5fe3fe71bfd] Pending
helpers_test.go:352: "netcat-cd4db9dbf-4s5c9" [06963bcc-1577-43c8-a339-c5fe3fe71bfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4s5c9" [06963bcc-1577-43c8-a339-c5fe3fe71bfd] Running
E1208 02:08:12.677968  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004265382s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-5vcxl" [b70ae090-945d-47da-940d-cc1c0c859611] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004010105s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-475514 "pgrep -a kubelet"
I1208 02:08:18.809896  846711 config.go:182] Loaded profile config "flannel-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vwpq5" [9c239da8-5fb5-4d55-8943-d30a88a3cb74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1208 02:08:22.920139  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vwpq5" [9c239da8-5fb5-4d55-8943-d30a88a3cb74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004299239s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (42.9s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1208 02:08:43.402087  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 02:08:45.572354  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/no-preload-536520/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-475514 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (42.904436126s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.90s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-475514 "pgrep -a kubelet"
E1208 02:09:24.363872  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kindnet-475514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1208 02:09:24.385107  846711 config.go:182] Loaded profile config "bridge-475514": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-475514 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x77k7" [fcc16ec1-fe58-4a89-a287-a7a6acad4cfb] Pending
E1208 02:09:25.031340  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x77k7" [fcc16ec1-fe58-4a89-a287-a7a6acad4cfb] Running
E1208 02:09:30.128065  846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003154876s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-475514 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-475514 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (38/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.16
392 TestNetworkPlugins/group/kubenet 3.57
400 TestNetworkPlugins/group/cilium 4.08

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-076269 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-076269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-076269
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
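
[editor's note] The platform-gated skips in this table are plain t.Skip calls guarding the top of each test. A hypothetical illustration (not minikube's actual helper) mirroring TestOffline's "only docker runtime supported on arm64" condition:

package example

import (
	"runtime"
	"testing"
)

// TestArm64Gate shows the gating pattern used throughout this report: the
// test bails out early when the platform, driver, or runtime is unsupported.
func TestArm64Gate(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		t.Skip("skipping: only docker runtime supported on arm64")
	}
	// real assertions would run here on supported platforms
}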

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-879407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-879407
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-475514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-475514

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-475514" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-475514" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-475514" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-475514" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: kubelet daemon config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> k8s: kubelet logs:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 01:23:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-614992
contexts:
- context:
    cluster: kubernetes-upgrade-614992
    user: kubernetes-upgrade-614992
  name: kubernetes-upgrade-614992
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-614992
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.crt
    client-key: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-475514

>>> host: docker daemon status:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: docker daemon config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: docker system info:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: cri-docker daemon status:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: cri-docker daemon config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: cri-dockerd version:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: containerd daemon status:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: containerd daemon config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: containerd config dump:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: crio daemon status:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: crio daemon config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: /etc/crio:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

>>> host: crio config:
* Profile "kubenet-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-475514"

----------------------- debugLogs end: kubenet-475514 [took: 3.411041754s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-475514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-475514
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)

TestNetworkPlugins/group/cilium (4.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-475514 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-475514

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-475514

>>> host: /etc/nsswitch.conf:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/hosts:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/resolv.conf:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-475514

>>> host: crictl pods:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: crictl containers:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> k8s: describe netcat deployment:
error: context "cilium-475514" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-475514" does not exist

>>> k8s: netcat logs:
error: context "cilium-475514" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-475514" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-475514" does not exist

>>> k8s: coredns logs:
error: context "cilium-475514" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-475514" does not exist

>>> k8s: api server logs:
error: context "cilium-475514" does not exist

>>> host: /etc/cni:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: ip a s:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: ip r s:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: iptables-save:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: iptables table nat:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-475514

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-475514

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-475514" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-475514" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-475514

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-475514

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-475514" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-475514" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-475514" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-475514" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-475514" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: kubelet daemon config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> k8s: kubelet logs:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 01:23:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-614992
contexts:
- context:
    cluster: kubernetes-upgrade-614992
    user: kubernetes-upgrade-614992
  name: kubernetes-upgrade-614992
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-614992
  user:
    client-certificate: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.crt
    client-key: /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/kubernetes-upgrade-614992/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-475514

>>> host: docker daemon status:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: docker daemon config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: docker system info:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: cri-docker daemon status:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: cri-docker daemon config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: cri-dockerd version:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: containerd daemon status:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: containerd daemon config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: containerd config dump:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: crio daemon status:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: crio daemon config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: /etc/crio:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

>>> host: crio config:
* Profile "cilium-475514" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-475514"

----------------------- debugLogs end: cilium-475514 [took: 3.921692783s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-475514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-475514
--- SKIP: TestNetworkPlugins/group/cilium (4.08s)
